- CNN-patches: this method is relatively decoupled from the solving process. We compute flow descriptors that encode flow properties in an “as-invariant-as-possible” manner. We use these to look up pre-computed patches of 4D data.
- ML-FLIP: this data-driven model captures sub-grid scale formation of droplets for liquid simulations.
- tempoGAN: this GAN approach directly synthesizes a temporally coherent state of an advected quantity such as smoke.
- Latent-space physics: this paper focuses on pressure fields over time. In contrast to the others, it predicts the temporal evolution of pressure in the latent-space of an encoder network.
- Neural Liquid Drop: this method captures full solutions for classes of liquid problems in terms of space-time deformations. As such, it directly generates an implicit surface that could be processed, e.g., for visualization.
- Last year we also had a paper on descriptor learning for fluid flow. Still worth a mention 🙂
Sebastian Barschkis, a CS student at TUM, has just presented his latest progress regarding the integration of our mantaflow solver into Blender. You can check out his full presentation, including insights into the code structure as well as how to use the solver, here:
There are admittedly still some rough edges, but mantaflow should give Blender users a significant step forward in terms of visual quality and performance.
The full source code of our recent SIGGRAPH paper coupling fluid simulations with convolutional neural networks is finally online! You can check it out here:
The code uses our mantaflow framework for the Navier-Stokes simulation part, and Google’s tensorflow framework for the deep learning portion. You can find a short introduction / how-to on the GitHub page above. If you give it a try, let us know how it works!
The corresponding paper is this one
“Data-Driven Synthesis of Smoke Flows with CNN-based Feature Descriptors”, by Rachel Chu and Nils Thuerey.
We just completed the Ferienakademie 2017 course on Neural Networks for Fluid Simulations. In total, 17 participants prepared short presentations about research highlights, and then worked hard on their own implementations of various generative neural networks for two-dimensional fluid flow. This course was jointly organized by Nils Thuerey (TUM), Michael Engel (FAU), and Miriam Mehl (Universitaet Stuttgart).
As part of the course, the participants were able to gain first-hand experience with deep learning algorithms, and explore connecting these algorithms with problems from the area of physical simulations. We used the tensorflow framework (https://www.tensorflow.org) for the deep learning part, and our own mantaflow solver (http://www.mantaflow.com) for the Navier-Stokes simulations.
The Ferienakademie (https://www.ferienakademie.de) is a long-established institution at TUM. It takes place every year in the Sarntal in “Alto Adige”, i.e., South Tyrol, and this year was actually its 34th instance. Highly recommended for motivated students who are interested in going beyond the standard curricula of a university, and who have a certain affinity for hiking, of course 🙂
Here’s a photo of this year’s participants – after all the hard work & hiking were done…
You can now find a second mantaflow-tensorflow tutorial on our webpage. It explains the example0 code of mantaflow, which is intentionally kept as simple as possible: a minimal mantaflow scene that generates some flow data, and a small tensorflow setup that trains a neural network with this data.
Despite its simplicity, it contains all the important parts: data wrangling, network training, and result generation. It also demonstrates how much you can get out of a fifty-dimensional latent space with a simple NN autoencoder.
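To get an intuition for what such an autoencoder does, here is a deliberately tiny sketch in plain numpy. This is illustrative only: the actual example0 code uses tensorflow and real simulation data, and all sizes and signals below are made up for the demo (a linear autoencoder, rather than the tutorial's network).

```python
import numpy as np

rng = np.random.default_rng(0)

# fake "flow data": every sample mixes 8 smooth basis signals, so an
# 8-dimensional code is enough to reconstruct it (stand-in for the
# fifty-dimensional latent space mentioned above)
n_samples, n_dim, n_latent = 256, 64, 8
t = np.linspace(0.0, 1.0, n_dim)
basis = np.stack([np.sin(2.0 * np.pi * (k + 1) * t) for k in range(n_latent)])
coeff = rng.normal(size=(n_samples, n_latent))
data = coeff @ basis
data /= data.std()

# a linear autoencoder: encode to n_latent values, decode back
W_enc = rng.normal(0.0, 1.0 / np.sqrt(n_dim), (n_dim, n_latent))
W_dec = rng.normal(0.0, 1.0 / np.sqrt(n_latent), (n_latent, n_dim))

def loss():
    return np.mean((data @ W_enc @ W_dec - data) ** 2)

initial = loss()
lr = 0.01
for _ in range(300):
    code = data @ W_enc                          # encode into the latent space
    err = code @ W_dec - data                    # reconstruction error
    g_dec = code.T @ err / n_samples             # gradient of the MSE w.r.t. W_dec
    g_enc = data.T @ (err @ W_dec.T) / n_samples # gradient w.r.t. W_enc
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc

print(initial, loss())  # the reconstruction error should shrink during training
```

The point of the exercise is the same as in the tutorial: if the data has low-dimensional structure, a small latent code can capture it surprisingly well.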
Curious how these weird wisps of smoke were created? Check out the full tutorial here.
Sebastian Barschkis, a TUM computer science student, just successfully finished his Google Summer of Code project, pushing the integration of our fluid solver mantaflow into the open-source animation package Blender (https://www.blender.org) another step forward. The main goals of his project were a secondary particle extension (for splash & foam particles of liquids) and the integration of our primal-dual guiding optimization (see the full paper here). Hopefully, this moves us yet another step closer to releasing mantaflow as part of an official Blender release, ideally 2.8!
More detailed documentation and info can be found on Sebastian’s wiki page:
And the official summer of code project page can be found here:
We’ve just posted the first introduction in a series on how to couple mantaflow fluid sims with tensorflow and deep learning algorithms.
The latest release (v0.11) of mantaflow comes with a set of data-transfer functions to exchange data between the two frameworks, and provides three examples of varying complexity. The page below gives an introduction to mantaflow-tensorflow coupling, and an overview of the data transfer functions. More in-depth discussions of the three coupling examples will follow in the coming weeks.
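The basic coupling pattern looks roughly like this. Note that this is a sketch with plain numpy stand-ins: the actual transfer functions ship with mantaflow v0.11, `fake_network` is a hypothetical placeholder for a real tensorflow model, and the grid here is just a numpy array standing in for a solver grid.

```python
import numpy as np

res = 16
# stand-in for a mantaflow Grid<Real> holding smoke density
density_grid = np.zeros((res, res, res), dtype=np.float32)
density_grid[4:12, 4:12, 4:12] = 1.0  # some smoke in the middle

# 1) solver -> array: reshape into the layout the network expects
batch = density_grid.reshape(1, res, res, res, 1)

# 2) run the "network" (placeholder op: a mild directional smoothing)
def fake_network(x):
    return 0.5 * x + 0.5 * np.roll(x, 1, axis=1)

out = fake_network(batch)

# 3) array -> solver: write the result back onto the grid
density_grid[...] = out.reshape(res, res, res)
print(density_grid.shape, float(density_grid.max()))
```

In the real setup, steps 1 and 3 are handled by mantaflow's data-transfer functions, and step 2 is a tensorflow inference call; the numpy array is simply the common currency between the two frameworks.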
Our recent research papers were just presented in Los Angeles at the ACM SIGGRAPH conference, and ACM has just posted the videos of two of the presentations online. You can view them under the following link (starting at 28:00):
The two papers presented here focus on data-driven fluid simulations and simulation algorithms powered by deep learning. The first proposes a method to match pre-computed space-time patches of flow data using a convolutional neural network. This network can robustly establish correspondences between new simulations and the pre-computed entries in the repository. In particular, the network learns to take into account the effects of numerical viscosity, which are otherwise extremely difficult to predict. This is a good example of how deep learning techniques can extend and improve traditional techniques for numerical simulations.
The second method targets the complex behavior of liquid simulations. It employs 5D optical flow solves to robustly register the potentially very different space-time surfaces of liquid simulations. This registration can afterwards be used to smoothly interpolate between different simulations without losing too much detail. The talk also discusses how this approach can be extended by deep learning: we use a convolutional neural network to generate a second deformation to take into account the full behavior of the liquid space region under consideration. The trained network is fast enough to be executed interactively on a regular mobile phone. Once the space-time surface is deformed with the OF and network deformations, it can be rendered from arbitrary viewpoints very efficiently.
Below you can find the full abstracts of all three papers, and links to the corresponding pages.
We present a novel deep learning algorithm to synthesize high resolution flow simulations with reusable repositories of space-time flow data. In our work, we employ a descriptor learning approach to encode the similarity between fluid regions with differences in resolution and numerical viscosity. We use convolutional neural networks to generate the descriptors from fluid data such as smoke density and flow velocity. At the same time, we present a deformation limiting patch advection method which allows us to robustly track deformable fluid regions. With the help of this patch advection, we generate stable space-time data sets from detailed fluids for our repositories. We can then use our learned descriptors to quickly localize a suitable data set when running a new simulation. This makes our approach very efficient, and resolution independent. We will demonstrate with several examples that our method yields volumes with very high effective resolutions, and non-dissipative small scale details that naturally integrate into the motions of the underlying flow.
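The lookup step described above can be sketched in a few lines: once a network has mapped each repository patch to a descriptor vector, matching a region of a new simulation reduces to a nearest-neighbor search in descriptor space. The descriptors below are random placeholders (in the paper they come from a CNN applied to smoke density and velocity), and all sizes are invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(1)

# pretend the CNN has already encoded 1000 repository patches
# into unit-length 32-dimensional descriptors
n_patches, d = 1000, 32
repo_descriptors = rng.normal(size=(n_patches, d))
repo_descriptors /= np.linalg.norm(repo_descriptors, axis=1, keepdims=True)

# a query descriptor computed for a region of the new, coarse simulation;
# here we perturb a known repository entry to simulate a near-match
query = repo_descriptors[42] + 0.05 * rng.normal(size=d)
query /= np.linalg.norm(query)

# cosine similarity against the whole repository, pick the best entry
scores = repo_descriptors @ query
best = int(np.argmax(scores))
print(best)  # index of the best match (should recover patch 42)
```

The descriptor learning is what makes this search meaningful: the network is trained so that coarse, viscous regions and their detailed counterparts land close together in this space, which a naive distance on raw densities would not achieve.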
We present a novel method to interpolate smoke and liquid simulations in order to perform data-driven fluid simulations. Our approach calculates a dense space-time deformation using grid-based signed-distance functions of the inputs. A key advantage of this implicit Eulerian representation is that it allows us to use powerful techniques from the optical flow area. We employ a five-dimensional optical flow solve. In combination with a projection algorithm, and residual iterations, we achieve a robust matching of the inputs. Once the match is computed, arbitrary in-between variants can be created very efficiently. To concatenate multiple long-range deformations, we propose a novel alignment technique. Our approach has numerous advantages, including automatic matches without user input, volumetric deformations that can be applied to details around the surface, and the inherent handling of topology changes. As a result, we can interpolate swirling smoke clouds, and splashing liquid simulations. We can even match and interpolate phenomena with fundamentally different physics: a drop of liquid, and a blob of heavy smoke.
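A toy illustration of the in-between generation step, once a match is known (everything here is invented for the demo): two signed-distance fields of circles differ by a translation, the "deformation" is that known shift, and an in-between shape is produced by applying a scaled fraction of it. The paper computes the deformation with a 5D optical flow solve on space-time SDFs instead of assuming it.

```python
import numpy as np

res = 64
y, x = np.mgrid[0:res, 0:res]

def circle_sdf(cx, cy, r):
    # signed distance to a circle of radius r centered at (cx, cy)
    return np.sqrt((x - cx) ** 2 + (y - cy) ** 2) - r

sdf_a = circle_sdf(20, 32, 10)   # input A
sdf_b = circle_sdf(44, 32, 10)   # input B: same circle, shifted by 24 cells

def warp_x(sdf, shift):
    # shift a field along x by resampling (nearest-cell, for simplicity)
    xs = np.clip(np.round(x - shift).astype(int), 0, res - 1)
    return sdf[y, xs]

alpha = 0.5
inbetween = warp_x(sdf_a, alpha * 24.0)  # apply half of the deformation

# the zero crossing (the surface) should now sit halfway between A and B
cx_est = x[np.abs(inbetween) < 0.5].mean()
print(cx_est)  # roughly 32, the midpoint of the two circle centers
```

This is also why creating in-betweens is so cheap once the match exists: each intermediate shape is just a scaled application of the precomputed deformation, with no new solve required.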
Liquids exhibit complex non-linear behavior under changing simulation conditions such as user interactions. We propose a method to map this complex behavior over a parameter range onto a reduced representation based on space-time deformations. In order to represent the complexity of the full space of inputs, we leverage the power of generative neural networks to learn a reduced representation. We introduce a novel deformation-aware loss function, which enables optimization in the highly non-linear space of multiple deformations. To demonstrate the effectiveness of our approach, we showcase the method with several complex examples in two and four dimensions. Our representation makes it possible to generate implicit surfaces of liquids very efficiently, so that the scene can be displayed from any angle, and secondary effects such as particle systems can be added. We have implemented a mobile application for our full output pipeline to demonstrate that real-time interaction is possible with our approach.
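The core idea behind a deformation-aware loss can be sketched in one dimension (all names and setup below are invented; the paper's version operates on 4D space-time SDFs with learned deformation fields). Instead of penalizing predicted deformation parameters directly, the error is measured on the result of applying the deformation, so the optimization is driven by how well the deformed surface matches the target.

```python
import numpy as np

res = 64
xs = np.arange(res, dtype=float)

def box_sdf(center, half):
    # 1D signed distance to the interval [center - half, center + half]
    return np.abs(xs - center) - half

def warp(field, shift):
    # apply a rigid shift as a stand-in for a general deformation field
    idx = np.clip(np.round(xs - shift).astype(int), 0, res - 1)
    return field[idx]

source = box_sdf(20.0, 6.0)
target = box_sdf(30.0, 6.0)      # same shape, shifted by 10 cells

def deformation_aware_loss(shift):
    # compare the *deformed* SDF against the target SDF
    deformed = warp(source, shift)
    return np.mean((deformed - target) ** 2)

# the loss drops sharply when the applied deformation reproduces the target
print(deformation_aware_loss(0.0) > deformation_aware_loss(10.0))  # True
```

The benefit is that two different deformations producing the same final surface are treated as equally good, which matters when multiple deformations are concatenated and many parameter combinations lead to similar shapes.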
The new version of mantaflow is online! mantaflow is our open-source framework targeted at fluid simulation research in Computer Graphics. We’re especially working on making mantaflow a convenient platform for fluids and deep learning. The new release contains a first set of tools and examples to get started. We will post more in-depth tutorials here in the coming weeks.
In addition, the new release supports surface tension forces, e.g., for simulating small droplets, and a viscosity solve for thicker materials and physically more accurate simulations. The fast multigrid solver is another highlight. It allows for efficient calculations of large-scale effects.
Here’s an incomplete feature list:
- multigrid-preconditioned solver
- Eulerian simulation using MAC Grids, PCG pressure solver and MacCormack advection
- Flexible particle systems
- FLIP simulations for liquids
- Surface mesh tracking
- Free surface simulations with levelsets, fast marching
- Wavelet and surface turbulence
- K-epsilon turbulence modeling and synthesis
- Maya and Blender export for rendering
- tensorflow coupling via numpy arrays
Btw., mantaflow has been used in numerous publications! Among others:
- Data-Driven Synthesis of Smoke Flows with Convolutional-Neural-Network-based Feature Descriptors, SIGGRAPH 2017
- Interpolations of Smoke and Liquid Simulations, Transactions on Graphics 2016
- Narrow-band FLIP, Eurographics 2016
- Surface Turbulence for Particle-Based Liquid Simulations, SIGGRAPH Asia 2015
- Connecting Forward and Inverse Problems in Fluids, SIGGRAPH 2014
- Liquid Surface Tracking with Error Compensation, SIGGRAPH 2013
- Turbulent fluids: Course, SIGGRAPH 2013
- Lagrangian Vortex Sheets for Fluid Animation, SIGGRAPH 2012
- Physics-Inspired Topology Changes for Thin Fluid Features, SIGGRAPH 2010
- A Multiscale Approach to Mesh-based Surface Tension flows, SIGGRAPH 2010
Fig. 1: A few images of a controlled smoke simulation using the PD-guiding feature of mantaflow.
We just added two new publications. One of them focuses on robust decompositions of fluid flows into vortex filaments, and it will be presented at SCA 2017 in Los Angeles soon. You can find all details on the accompanying page, and you can see a preview below.
The other work targets generative neural networks (full details here). In this case the network actually generates a dense space-time deformation field to capture spaces of liquid behavior. We have also created a proof-of-concept Android app (Lukas Prandtl was the main driving force behind this one), which you can download to try it out yourself. The app evaluates the full trained neural network every time you tap to synthesize the corresponding deformation field. This is especially tough in our setting, as the network employs 4D deconvolution layers that need to be evaluated on the mobile device.