Our recent research papers were presented at the ACM SIGGRAPH conference in Los Angeles, and ACM has now posted the videos of two of the presentations online. You can view them at the following link (starting at 28:00):
The two papers presented here focus on data-driven fluid simulations and on simulation algorithms powered by deep learning. The first paper proposes a method to match pre-computed space-time patches of flow data using a convolutional neural network. This network can robustly establish correspondences between new simulations and the pre-computed entries in the repository. In particular, the network learns to take into account the effects of numerical viscosity, which are otherwise extremely difficult to predict. This is a good example of how deep learning techniques can extend and improve traditional techniques for numerical simulations.
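As a rough illustration of the lookup step, the sketch below encodes space-time patches into descriptor vectors and retrieves the closest repository entry by cosine similarity. Everything here is hypothetical: the linear `encode_patch` stands in for the actual convolutional network, and the patch size and descriptor dimension are made up.

```python
import numpy as np

def encode_patch(patch, weights):
    """Toy stand-in for the CNN descriptor: flatten the space-time
    patch and project it to a low-dimensional unit-length vector."""
    d = weights @ patch.ravel()
    return d / (np.linalg.norm(d) + 1e-8)

def best_match(query_desc, repo_descs):
    """Return the repository index whose descriptor is closest to the
    query, using cosine similarity (dot product of unit vectors)."""
    sims = repo_descs @ query_desc
    return int(np.argmax(sims)), float(np.max(sims))

# Hypothetical setup: 4x4x4x4 space-time patches, 16-D descriptors.
rng = np.random.default_rng(0)
W = rng.standard_normal((16, 4**4))

repo_patches = [rng.standard_normal((4, 4, 4, 4)) for _ in range(100)]
repo_descs = np.stack([encode_patch(p, W) for p in repo_patches])

# A slightly perturbed copy of entry 42 should map back to entry 42.
query = repo_patches[42] + 0.01 * rng.standard_normal((4, 4, 4, 4))
idx, sim = best_match(encode_patch(query, W), repo_descs)
print(idx)  # → 42
```

In the actual method the network is trained so that descriptors stay close under resolution and viscosity changes; the nearest-neighbor lookup itself stays this simple.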
The second method targets the complex behavior of liquid simulations. It employs 5D optical flow solves to robustly register the potentially very different space-time surfaces of liquid simulations. The resulting registration can then be used to smoothly interpolate between different simulations without losing too much detail. The talk also discusses how this approach can be extended with deep learning: we use a convolutional neural network to generate a second deformation that accounts for the full behavior of the liquid region under consideration. The trained network is fast enough to be executed interactively on a regular mobile phone. Once the space-time surface is deformed with the optical flow and network deformations, it can be rendered from arbitrary viewpoints very efficiently.
Below you can find the full abstracts of all three papers, and links to the corresponding pages.
We present a novel deep learning algorithm to synthesize high-resolution flow simulations with reusable repositories of space-time flow data. In our work, we employ a descriptor learning approach to encode the similarity between fluid regions with differences in resolution and numerical viscosity. We use convolutional neural networks to generate the descriptors from fluid data such as smoke density and flow velocity. At the same time, we present a deformation-limiting patch advection method which allows us to robustly track deformable fluid regions. With the help of this patch advection, we generate stable space-time data sets from detailed fluids for our repositories. We can then use our learned descriptors to quickly localize a suitable data set when running a new simulation. This makes our approach very efficient and resolution-independent. We demonstrate with several examples that our method yields volumes with very high effective resolutions, and non-dissipative small-scale details that naturally integrate into the motions of the underlying flow.
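The deformation-limiting idea can be caricatured as follows: advect a patch's sample points through the flow, and discard the patch once its internal distances distort beyond a threshold. The point layout, velocity fields, and distance-ratio criterion below are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def advect_points(points, velocity_fn, dt):
    """Forward-Euler advection of patch sample points."""
    return points + dt * velocity_fn(points)

def deformation_ok(points0, points1, limit=1.5):
    """Accept the advected patch only while pairwise distances stay
    within a factor `limit` of the original layout -- a simple proxy
    for limiting patch deformation."""
    d0 = np.linalg.norm(points0[:, None] - points0[None, :], axis=-1)
    d1 = np.linalg.norm(points1[:, None] - points1[None, :], axis=-1)
    mask = d0 > 1e-6
    ratio = d1[mask] / d0[mask]
    return bool(ratio.max() < limit and ratio.min() > 1.0 / limit)

# Hypothetical example: a rigid translation deforms nothing...
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
moved = advect_points(pts, lambda p: np.ones_like(p), dt=0.1)
print(deformation_ok(pts, moved))  # → True

# ...while a strong shear distorts the patch past the limit,
# so it would be dropped and re-seeded.
sheared = advect_points(pts, lambda p: 10.0 * p * [1.0, -1.0], dt=0.1)
print(deformation_ok(pts, sheared))  # → False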
We present a novel method to interpolate smoke and liquid simulations in order to perform data-driven fluid simulations. Our approach calculates a dense space-time deformation using grid-based signed-distance functions of the inputs. A key advantage of this implicit Eulerian representation is that it allows us to use powerful techniques from the optical flow area. We employ a five-dimensional optical flow solve. In combination with a projection algorithm and residual iterations, we achieve a robust matching of the inputs. Once the match is computed, arbitrary in-between variants can be created very efficiently. To concatenate multiple long-range deformations, we propose a novel alignment technique. Our approach has numerous advantages, including automatic matches without user input, volumetric deformations that can be applied to details around the surface, and the inherent handling of topology changes. As a result, we can interpolate swirling smoke clouds, and splashing liquid simulations. We can even match and interpolate phenomena with fundamentally different physics: a drop of liquid, and a blob of heavy smoke.
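A minimal 1D sketch of deformation-based interpolation, assuming a known deformation `u` that carries input A onto input B. The paper computes a dense 5D space-time deformation via optical flow; here `u` is a single hand-picked shift, and the SDFs are simple intervals.

```python
import numpy as np

def sdf_interval(x, center, radius):
    """1D signed distance to the interval [center-radius, center+radius]."""
    return np.abs(x - center) - radius

def warp_interpolate(x, u, alpha, sdf_a, sdf_b):
    """Evaluate an in-between SDF: warp the evaluation points by a
    fraction alpha of the deformation u (which maps A onto B), then
    blend the two inputs' values.  A 1D caricature of the dense
    volumetric deformation used in the paper."""
    phi_a = sdf_a(x - alpha * u)          # A deformed partway toward B
    phi_b = sdf_b(x + (1.0 - alpha) * u)  # B deformed partway toward A
    return (1.0 - alpha) * phi_a + alpha * phi_b

x = np.linspace(-2.0, 2.0, 401)
sdf_a = lambda s: sdf_interval(s, center=-1.0, radius=0.3)
sdf_b = lambda s: sdf_interval(s, center=+1.0, radius=0.3)

u = 2.0  # deformation that carries A's interval onto B's
phi_half = warp_interpolate(x, u, alpha=0.5, sdf_a=sdf_a, sdf_b=sdf_b)
zero_crossings = x[np.abs(phi_half) < 0.005]
print(zero_crossings.min(), zero_crossings.max())  # interface near ±0.3
```

The halfway shape sits at the midpoint with its width preserved, which is exactly what a naive value-only blend of the two SDFs would fail to produce (it would yield two half-depth copies instead of one moved shape).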
Liquids exhibit complex non-linear behavior under changing simulation conditions such as user interactions. We propose a method to map this complex behavior over a parameter range onto a reduced representation based on space-time deformations. In order to represent the complexity of the full space of inputs, we leverage the power of generative neural networks to learn a reduced representation. We introduce a novel deformation-aware loss function, which enables optimization in the highly non-linear space of multiple deformations. To demonstrate the effectiveness of our approach, we showcase the method with several complex examples in two and four dimensions. Our representation makes it possible to generate implicit surfaces of liquids very efficiently, which in turn allows us to display the scene from any angle and to add secondary effects such as particle systems. We have implemented a mobile application for our full output pipeline to demonstrate that real-time interaction is possible with our approach.
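The deformation-aware loss idea, in a deliberately simplified 1D form: instead of penalizing errors in the deformation parameters themselves, the loss is evaluated on the result of applying the predicted deformation to a reference surface. The 1D setup and all names below are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def apply_deformation(phi_ref, u, x):
    """Warp a reference SDF (sampled at x) by a shift u via linear
    interpolation -- a 1D stand-in for the volumetric deformation."""
    return np.interp(x - u, x, phi_ref)

def deformation_aware_loss(u_pred, phi_ref, phi_target, x):
    """Compare the *deformed result* with the target, instead of
    comparing deformation parameters directly: errors are measured
    where they matter, on the warped surface."""
    phi_warped = apply_deformation(phi_ref, u_pred, x)
    return float(np.mean((phi_warped - phi_target) ** 2))

x = np.linspace(-2.0, 2.0, 401)
phi_ref = np.abs(x) - 0.5           # interval centered at 0
phi_target = np.abs(x - 0.8) - 0.5  # same interval shifted by 0.8

# The loss is smallest for the true shift and grows away from it.
losses = {u: round(deformation_aware_loss(u, phi_ref, phi_target, x), 4)
          for u in (0.0, 0.4, 0.8)}
print(min(losses, key=losses.get))  # → 0.8
```

In the actual method a generative network predicts dense deformation fields rather than a single shift, and this result-space loss is what makes optimizing through multiple concatenated deformations tractable.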