SIGGRAPH 2018 tempoGAN talk on YouTube

You Xie and Rachel Chu have successfully presented their paper “tempoGAN: A Temporally Coherent, Volumetric GAN for Super-resolution Fluid Flow” at SIGGRAPH 2018. You can watch their full presentation in the YouTube video below.

Presentation:

More info regarding the SIGGRAPH 2018 technical papers:

Technical Papers

tempoGAN paper abstract:

We propose a temporally coherent generative model addressing the super-resolution problem for fluid flows. Our work represents the first approach to synthesize four-dimensional physics fields with neural networks. Based on a conditional generative adversarial network that is designed for the inference of three-dimensional volumetric data, our model generates consistent and detailed results by using a novel temporal discriminator, in addition to the commonly used spatial one. Our experiments show that the generator is able to infer more realistic high-resolution details by using additional physical quantities, such as low-resolution velocities or vorticities. Besides improvements in the training process and in the generated outputs, these inputs offer means for artistic control as well. We additionally employ a physics-aware data augmentation step, which is crucial to avoid overfitting and to reduce memory requirements. In this way, our network learns to generate advected quantities with highly detailed, realistic, and temporally coherent features. Our method works instantaneously, using only a single time-step of low-resolution fluid data. We demonstrate the abilities of our method using a variety of complex inputs and applications in two and three dimensions.
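The temporal discriminator mentioned in the abstract judges short frame sequences rather than single frames, which is what enforces coherence over time. As a toy illustration of the data layout only (a NumPy sketch with made-up sizes, not the paper's actual network), such a discriminator input can be assembled by stacking consecutive frames:

```python
import numpy as np

def make_temporal_triplet(frames, t):
    """Stack three consecutive frames along a channel axis.

    tempoGAN's temporal discriminator looks at short sequences of
    (advected) frames, so that flickering between frames is penalized;
    this toy helper only illustrates the input layout for a 2D case.
    """
    return np.stack([frames[t - 1], frames[t], frames[t + 1]], axis=-1)

# toy data: four 8x8 "density" frames
rng = np.random.default_rng(0)
frames = [rng.random((8, 8)) for _ in range(4)]

triplet = make_temporal_triplet(frames, 1)
print(triplet.shape)  # (8, 8, 3)
```

The spatial discriminator, in contrast, would receive a single frame; training alternates between both.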


tempoGAN source code now online

The source code for our tempoGAN project, which aims at super-resolution inference of Navier-Stokes transport phenomena (such as smoke clouds), is now online at: https://github.com/thunil/tempoGAN

It comes with a readme, data generation scripts, and should give an easy starting point for training generative adversarial nets for fluids. If you try it, let us know how it works!

Project page

The full abstract of the paper can be found in the tempoGAN post above.

Best-paper award for “Coupled Fluid Density and Motion from Single Views” at SCA 2018

Congratulations to Marie-Lena Eckert for winning the best-paper award at SCA 2018 for her submission “Coupled Fluid Density and Motion from Single Views”.

Her work aims at reconstructing fluid flow phenomena with strong Navier-Stokes priors. This makes it possible to compute dense flow fields based on only a monocular video, i.e., an image sequence from a single viewpoint.

Paper Abstract: We present a novel method to reconstruct a fluid’s 3D density and motion based on just a single sequence of images. This is rendered possible by using powerful physical priors for this strongly under-determined problem. More specifically, we propose a novel strategy to infer density updates strongly coupled to previous and current estimates of the flow motion. Additionally, we employ an accurate discretization and depth-based regularizers to compute stable solutions. Using only one view for the reconstruction reduces the complexity of the capturing setup drastically and could even allow for online video databases or smart-phone videos as inputs. The reconstructed 3D velocity can then be flexibly utilized, e.g., for re-simulation, domain modification or guiding purposes.

More information about the ACM SIGGRAPH / Eurographics Symposium on Computer Animation (SCA): http://sca2018.inria.fr

Coupled Fluid Density and Motion from Single Views

Latent-space Physics source code

The source code for our latent space physics paper is online now:
https://github.com/wiewel/LatentSpacePhysics

It contains both the Navier-Stokes solver for data generation (based on mantaflow http://mantaflow.com), and the keras code (for tensorflow https://www.tensorflow.org) for training the autoencoder and LSTM networks.

The preprint can be found here: https://arxiv.org/pdf/1802.10123

Paper Abstract

Our work explores methods for the data-driven inference of temporal evolutions of physical functions with deep learning techniques. More specifically, we target fluid flow problems, and we propose a novel network architecture to predict the changes of the pressure field over time. The central challenge in this context is the high dimensionality of Eulerian space-time data sets. Key for arriving at a feasible algorithm is a technique for dimensionality reduction based on convolutional neural networks, as well as a special architecture for temporal prediction. We demonstrate that dense 3D+time functions of physics systems can be predicted with neural networks, and we arrive at a neural-network based simulation algorithm with practical speed-ups. We demonstrate the capabilities of our method with a series of complex liquid simulations, and with a set of single-phase simulations. Our method predicts pressure fields very efficiently. It is more than two orders of magnitude faster than a regular solver. Additionally, we present and discuss a series of detailed evaluations for the different components of our algorithm.
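To make the structure of the algorithm concrete, here is a deliberately tiny NumPy sketch of the pipeline described above, with linear maps standing in for the convolutional autoencoder and the LSTM (all sizes and names are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins for the paper's components: linear "encoder"/"decoder"
# matrices instead of a convolutional autoencoder, and a linear map
# instead of an LSTM for the temporal prediction.
N = 64 * 64          # flattened pressure field size (toy resolution)
L = 16               # latent dimension

E = rng.standard_normal((L, N)) / np.sqrt(N)   # encoder
D = E.T                                        # decoder (transpose as a cheap inverse)
A = np.eye(L) * 0.9                            # latent time-stepping stand-in for the LSTM

def predict_next_pressure(p):
    """Advance a pressure field one step entirely in the reduced space."""
    c = E @ p            # encode to a latent code
    c_next = A @ c       # temporal prediction in latent space (LSTM in the paper)
    return D @ c_next    # decode back to a full field

p0 = rng.standard_normal(N)
p1 = predict_next_pressure(p0)
print(p1.shape)  # (4096,)
```

The speed-up in the paper comes from this structure: the expensive per-cell pressure solve is replaced by a few cheap operations in the low-dimensional latent space.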

Droplets with Neural Networks

The final version of our paper on learning droplet formation models with neural networks is online now. It will be presented at SCA 2018 in Paris.

Our paper proposes a new data-driven approach to model detailed splashes for liquid simulations with neural networks. Our model learns to generate small-scale splash detail for the fluid-implicit-particle method using training data acquired from physically parameterized, high resolution simulations. We use neural networks to model the regression of splash formation using a classifier together with a velocity modifier. For the velocity modification, we employ a heteroscedastic model. We evaluate our method for different spatial scales, simulation setups, and Navier-Stokes solvers. Our simulation results demonstrate that our model significantly improves visual fidelity with a large amount of realistic droplet formation and yields splash detail much more efficiently than finer discretizations.
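The heteroscedastic velocity modifier means the network predicts both a mean velocity change and its variance per droplet, so splashes vary more where the training data itself was more chaotic. A minimal NumPy sketch of the sampling step (variable names and toy values are ours, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

def heteroscedastic_sample(mean, log_var, rng):
    """Sample a velocity modification from a per-particle Gaussian.

    A heteroscedastic model predicts not just a mean velocity change
    but also its (log-)variance, so the sampled splash velocities are
    more varied in regions with more chaotic training data.
    """
    std = np.exp(0.5 * log_var)
    return mean + std * rng.standard_normal(mean.shape)

# toy per-particle predictions: 3D velocity mean and log-variance
mean = np.zeros((5, 3))
log_var = np.full((5, 3), -2.0)

dv = heteroscedastic_sample(mean, log_var, rng)
print(dv.shape)  # (5, 3)
```

In the full method, a classifier first decides which particles form splashes, and only those receive a sampled velocity modification.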

https://arxiv.org/abs/1704.04456

Further information

Volumetric Fluid Flow from a Single Video

Our SCA 2018 paper on Coupled Fluid Density and Motion from Single Views is online now! You can check it out here. We’re reconstructing a dense fluid flow field from a single video stream using a strong Navier-Stokes prior.

To be presented at: http://sca2018.inria.fr

Full Abstract
We present a novel method to reconstruct a fluid’s 3D density and motion based on just a single sequence of images. This is rendered possible by using powerful physical priors for this strongly under-determined problem. More specifically, we propose a novel strategy to infer density updates strongly coupled to previous and current estimates of the flow motion. Additionally, we employ an accurate discretization and depth-based regularizers to compute stable solutions. Using only one view for the reconstruction reduces the complexity of the capturing setup drastically and could even allow for online video databases or smart-phone videos as inputs. The reconstructed 3D velocity can then be flexibly utilized, e.g., for re-simulation, domain modification or guiding purposes. We will demonstrate the capacity of our method with a series of synthetic test cases and the reconstruction of real smoke plumes captured with a Raspberry Pi camera.

New paper online: Deep Fluids – A Generative Network for Parameterized Fluid Simulations

Our paper “Deep Fluids: A Generative Network for Parameterized Fluid Simulations” in collaboration with the CGL of ETH Zurich is online now!

You can check it out here:
http://www.byungsoo.me/project/deep-fluids/
https://arxiv.org/abs/1806.02071

The goal is a novel generative model to synthesize fluid simulations from a set of reduced parameters. A convolutional neural network is trained on a collection of discrete, parameterizable fluid simulation velocity fields. We also demonstrate that we can handle complex parameterizations in reduced spaces, and advance simulations in time by integrating in the latent space with a second network.
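Conceptually, the generator maps a small parameter/latent code to a full velocity field, while a second network advances the code in time. A toy NumPy sketch with linear maps standing in for both networks (all shapes are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)

C = 16                                                # latent/parameter code size
W_dec = rng.standard_normal((32 * 32 * 2, C)) * 0.01  # toy "generator" weights
W_int = np.eye(C)                                     # toy latent integration network

def generate_velocity(code):
    """Decode a latent code into a 2D velocity field (u, v channels)."""
    return (W_dec @ code).reshape(32, 32, 2)

def advance_latent(code):
    """Advance the simulation one step by integrating in latent space."""
    return W_int @ code

c = rng.standard_normal(C)
v0 = generate_velocity(c)                  # field for the current time step
v1 = generate_velocity(advance_latent(c))  # field for the next time step
```

The appeal of this design is that an entire velocity field is produced by a single network evaluation, instead of a per-cell solve.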

Computer Game Laboratory, SS’18 up and running

The current instance of our computer game laboratory, a practical course for TUM games engineering students, is in full swing; you can check out the latest development of the four game projects on our wiki main page.

Look out for the final presentations and playable demos during the SS’18 demo day…

One interesting new development here is a first joint team between TUM and students from the Media Design Hochschule Muenchen (MDH).

You can also find a full list of previous games here.

Ferienakademie 2018 – Accelerating Physics Simulations with Deep Learning

Computer simulations are a powerful method to study physical and engineering systems such as fluids, collections of molecules, or social agents. Traditionally, differential equations such as the Navier-Stokes equations form the basis of these simulations, as they dictate the time evolution. A recent, promising development is to use machine learning for model-free prediction of a system’s behavior and to identify the appearance of spatio-temporal patterns. Deep learning with neural networks is a particularly interesting and powerful machine learning method that can be employed for this task.

More info can be found here.

TEDx talk by Nils Thuerey

Nils recently gave a talk titled “Deep Learning Beyond Cats and Dogs” at a TEDx event organized at the Technical University of Munich.

Abstract: Deep learning, which is seemingly everywhere these days, is well-known for its capability to recognize cats and dogs in internet images, but it can and should be used for other things too. It can be used to figure out the complicated physics that dictate fluid behavior. Actually, simulating turbulence is not only a million dollar problem (really, google it!) but it can help us create more realistic virtual worlds. It can even help us understand medical and physiological behaviors like blood flowing through our body. Nils performs cutting-edge research and explains how neural networks are well on their way to becoming the fourth pillar of science.

Biography: Nils Thuerey’s work is in the field of computer graphics: he models physical behaviors of fluids such as water and smoke to enable computer created virtual effects to look like the real thing. These phenomena are very expensive to simulate computationally, so Nils’ research explores the use of deep learning methods to generate the effects more quickly and more realistically. Before assuming his assistant professor position at TUM, Nils studied in Erlangen, held a post-doc position in Zurich, and worked in the visual effects industry. He was awarded a technical Oscar for the development of an algorithm which aids in editing explosion and smoke effects for film.

This talk was given at a TEDx event using the TED conference format but independently organized by a local community. Learn more at https://www.ted.com/tedx, or visit the TEDxTUM website.

Recent research on physics-based deep learning

In the following we give an overview of our recent publications on physics-based deep learning methods. In particular, we focus on solving various aspects of fluid problems modeled with the Navier-Stokes (NS) equations. These topics are a central theme of the research work in our group. While, naturally, the long term goal would be to simply give the initial conditions of a problem as input to a neural network, and then rely on the network to infer the solution with appropriate accuracy, the complexity of the NS equations makes this an extremely challenging problem.
Thus, we typically consider constrained solution spaces for sub-problems of the NS equations, that we believe are nonetheless very interesting, and useful for their respective domains. In this way, we are also working towards improving the state of the art in order to tackle more and more general problems in the future.
Our publications have targeted different aspects of a typical simulation pipeline, and differ in terms of how deeply they are integrated into the Navier-Stokes solve. The following list is ordered from loose to tight coupling; e.g., the last entry completely replaces a regular solver.

mantaflow in Blender at BCon17

Sebastian Barschkis, a computer science student at TUM, has just presented his latest progress regarding the integration of our mantaflow solver into Blender. You can check out his full presentation, including insights about the code structure as well as using the solver, here:

There are admittedly still some rough edges, but mantaflow should give Blender users a significant step forward in terms of visual quality and performance.

Code for CNN-based flow descriptors is online

The full source code of our recent SIGGRAPH paper coupling fluid simulations with convolutional neural networks is finally online! You can check it out here:

https://github.com/RachelCmy/mantaPatch

The code uses our mantaflow framework for the Navier-Stokes simulation part, and Google’s tensorflow framework for the deep learning portion. You can find a short introduction / how-to on the github page above. If you give it a try, let us know how it works!

The corresponding paper is “Data-Driven Synthesis of Smoke Flows with CNN-based Feature Descriptors” by Rachel Chu and Nils Thuerey.

Ferienakademie 2017: Course on Machine Learning and Fluids

We just completed the Ferienakademie 2017 course on Neural Networks for Fluid Simulations. In total, 17 participants prepared short presentations about research highlights, and then worked hard on their own implementations of various generative neural networks for two-dimensional fluid flow. This course was jointly organized by Nils Thuerey (TUM), Michael Engel (FAU), and Miriam Mehl (Universitaet Stuttgart).

As part of the course, the participants were able to gain first-hand experience with deep learning algorithms, and explore connecting these algorithms with problems from the area of physical simulations. We used the tensorflow framework (https://www.tensorflow.org) for the deep learning part, and our own mantaflow solver (http://www.mantaflow.com) for the Navier-Stokes simulations.

The Ferienakademie (https://www.ferienakademie.de) is a long-established institution at TUM. It takes place every year in the Sarntal in “Alto Adige”, i.e., South Tyrol, and this year was its 34th instance. Highly recommended for motivated students who are interested in going beyond the standard curricula of a university, and who have a certain affinity for hiking, of course 🙂

Here’s a photo of this year’s participants – after all the hard work & hiking were done…

New tutorial on simple neural nets for fluid flow

You can now find a second mantaflow-tensorflow tutorial on our webpage. It explains mantaflow’s example0 code, which is kept as simple as possible: a minimal mantaflow scene that generates some flow data, and a small tensorflow setup that trains a basic neural network with this data.

Despite its simplicity, it contains all the important parts: data wrangling, network training, and result generation. It also demonstrates how much you can get out of a fifty-dimensional latent space with a simple NN autoencoder.

Curious how these weird wisps of smoke were created? Check out the full tutorial here.

Bringing together mantaflow and blender

Sebastian Barschkis, a TUM computer science student, has just successfully finished his Google Summer of Code project, pushing the integration of our fluid solver mantaflow into the open-source animation package blender (https://www.blender.org) a step further. The main goals of his project were a secondary particle extension (for splash & foam particles of liquids) and the integration of our primal-dual guiding optimization (see the full paper here). Hopefully, this moves us yet another step closer to shipping mantaflow as part of an official blender release, ideally version 2.8!

More detailed documentation and info can be found on Sebastian’s wiki page:

https://wiki.blender.org/index.php/User:Sebbas/GSoC_2017

And the official summer of code project page can be found here:

https://summerofcode.withgoogle.com/dashboard/project/5227204714692608/overview/


Coupling mantaflow with tensorflow – an introduction

We’ve just posted a first introduction in a series on how to couple mantaflow and fluid sims with tensorflow and deep learning algorithms.

The latest release (v0.11) of mantaflow comes with a set of data-transfer functions to exchange data between the two frameworks, and provides three examples of varying complexity. The page below gives an introduction to mantaflow-tensorflow coupling, and an overview of the data-transfer functions. More in-depth discussions of the three coupling examples will follow in the coming weeks.
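The basic exchange pattern is simple: grid data from the solver ends up in a numpy array, which is then reshaped into the layout a neural network expects. Here is a sketch of that reshaping step with a plain numpy array standing in for a mantaflow grid (the actual transfer functions are covered on the tutorial page):

```python
import numpy as np

# Stand-in for a mantaflow density grid at resolution 32^3; in a real
# scene file this array would be filled via mantaflow's grid-to-numpy
# transfer functions rather than by hand.
res = 32
density = np.zeros((res, res, res), dtype=np.float32)
density[8:24, 8:24, 8:24] = 1.0   # a toy smoke blob

# Typical reshaping before handing the data to a neural network:
# add batch and channel axes, giving an NDHWC layout.
batch = density[np.newaxis, ..., np.newaxis]
print(batch.shape)  # (1, 32, 32, 32, 1)
```

Going the other way, the network's output array is reshaped back to the grid resolution before it is copied into the solver.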

Get started here…

SIGGRAPH 2017: Data-driven methods and deep learning for fluids

Our recent research papers were just presented in Los Angeles at the ACM SIGGRAPH conference, and ACM has just posted the videos of two of the presentations online. You can view them under the following link (starting at 28:00):

https://www.youtube.com/watch?v=TMtd-IBl46g

The two papers presented here focus on data-driven fluid simulations and simulation algorithms powered by deep learning. The first proposes a method to match pre-computed space-time patches of flow data using a convolutional neural network. This network can robustly establish correspondences between new simulations and the pre-computed entries in the repository. In particular, the network learns to take into account the effects of numerical viscosity, which are otherwise extremely difficult to predict. This is a good example of how deep learning techniques can extend and improve traditional techniques for numerical simulations.

The second method targets the complex behavior of liquid simulations. It employs 5D optical flow solves to robustly register the potentially very different space-time surfaces of liquid simulations. This registration can afterwards be used to smoothly interpolate between different simulations without losing too much detail. The talk also discusses how this approach can be extended by deep learning: we use a convolutional neural network to generate a second deformation that takes into account the full behavior of the liquid in the space-time region under consideration. The trained network is fast enough to be executed interactively on a regular mobile phone. Once the space-time surface is deformed with the OF and network deformations, it can be rendered from arbitrary viewpoints very efficiently.

Below you can find the full abstracts of all three papers, and links to the corresponding pages.

Data-Driven Synthesis of Smoke Flows with CNN-based Feature Descriptors

We present a novel deep learning algorithm to synthesize high resolution flow simulations with reusable repositories of space-time flow data. In our work, we employ a descriptor learning approach to encode the similarity between fluid regions with differences in resolution and numerical viscosity. We use convolutional neural networks to generate the descriptors from fluid data such as smoke density and flow velocity. At the same time, we present a deformation limiting patch advection method which allows us to robustly track deformable fluid regions. With the help of this patch advection, we generate stable space-time data sets from detailed fluids for our repositories. We can then use our learned descriptors to quickly localize a suitable data set when running a new simulation. This makes our approach very efficient, and resolution independent. We will demonstrate with several examples that our method yields volumes with very high effective resolutions, and non-dissipative small scale details that naturally integrate into the motions of the underlying flow.
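The repository lookup described above can be illustrated with a toy example: patches are mapped to descriptor vectors, and a new simulation patch is matched to its nearest repository entry. Below, a fixed random projection stands in for the trained CNN descriptor (everything here is illustrative, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(5)

dim, desc_dim = 8 * 8, 16
proj = rng.standard_normal((desc_dim, dim)) / np.sqrt(dim)

def toy_descriptor(patch):
    """Stand-in for the CNN descriptor: a fixed random projection of the
    flattened patch. The real network is trained so that similar flow
    regions map to nearby descriptors despite resolution differences."""
    return proj @ patch.reshape(-1)

# Pre-computed repository: descriptors for stored space-time patches.
repo_patches = [rng.random((8, 8)) for _ in range(100)]
repo_desc = np.stack([toy_descriptor(p) for p in repo_patches])

# At simulation time: compute the descriptor of a new (slightly
# perturbed) patch, then find the nearest repository entry.
query = repo_patches[42] + 0.01 * rng.random((8, 8))
d = toy_descriptor(query)
best = int(np.argmin(np.linalg.norm(repo_desc - d, axis=1)))
print(best)  # 42: the lookup recovers the matching repository patch
```

This nearest-neighbor lookup in descriptor space is what makes the approach fast at simulation time: only small vectors are compared, not full volumes.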

Interpolations of Smoke and Liquid Simulations

We present a novel method to interpolate smoke and liquid simulations in order to perform data-driven fluid simulations. Our approach calculates a dense space-time deformation using grid-based signed-distance functions of the inputs. A key advantage of this implicit Eulerian representation is that it allows us to use powerful techniques from the optical flow area. We employ a five-dimensional optical flow solve. In combination with a projection algorithm, and residual iterations, we achieve a robust matching of the inputs. Once the match is computed, arbitrary in between variants can be created very efficiently. To concatenate multiple long-range deformations, we propose a novel alignment technique. Our approach has numerous advantages, including automatic matches without user input, volumetric deformations that can be applied to details around the surface, and the inherent handling of topology changes. As a result, we can interpolate swirling smoke clouds, and splashing liquid simulations. We can even match and interpolate phenomena with fundamentally different physics: a drop of liquid, and a blob of heavy smoke.

Pre-computed Liquid Spaces with Generative Neural Networks

Liquids exhibit complex non-linear behavior under changing simulation conditions such as user interactions. We propose a method to map this complex behavior over a parameter range onto reduced representation based on space-time deformations. In order to represent the complexity of the full space of inputs, we leverage the power of generative neural networks to learn a reduced representation. We introduce a novel deformation-aware loss function, which enables optimization in the highly non-linear space of multiple deformations. To demonstrate the effectiveness of our approach, we showcase the method with several complex examples in two and four dimensions. Our representation makes it possible to generate implicit surfaces of liquids very efficiently, which makes it possible to display the scene from any angle, and to add secondary effects such as particle systems. We have implemented a mobile application for our full output pipeline to demonstrate that real-time interaction is possible with our approach.

mantaflow v0.11

The new version of mantaflow is online! mantaflow is our open-source framework targeted at fluid simulation research in Computer Graphics. We’re especially working on making mantaflow a convenient platform for fluids and deep learning. The new release contains a first set of tools and examples to get started. We will post more in-depth tutorials here in the coming weeks.

In addition, the new release supports surface tension forces, e.g., for simulating small droplets, and a viscosity solve for thicker materials and physically more accurate simulations. The fast multigrid solver is another highlight. It allows for efficient calculations of large-scale effects.

Here’s an incomplete feature list:

  • multigrid-preconditioned solver
  • Eulerian simulation using MAC Grids, PCG pressure solver and MacCormack advection
  • Flexible particle systems
  • FLIP simulations for liquids
  • Surface mesh tracking
  • Free surface simulations with levelsets, fast marching
  • Wavelet and surface turbulence
  • K-epsilon turbulence modeling and synthesis
  • Maya and Blender export for rendering
  • tensorflow coupling via numpy arrays

Btw., mantaflow has been used in numerous publications! Among others:

Fig. 1: A few images of a controlled smoke simulation using the PD-guiding feature of mantaflow.