We will be presenting our recent work on physics-based deep learning for fluid flow at the NIPS 2018 workshop “Modeling the Physical World: Learning, Perception, and Control”, organized by Jiajun Wu, Kelsey Allen, Kevin Smith, Jessica Hamrick, Emmanuel Dupoux, Marc Toussaint, and Joshua Tenenbaum.
NIPS Conference: https://nips.cc
NIPS 2018 Workshop “Modeling the Physical World: Learning, Perception, and Control”: https://nips.cc/Conferences/2018/Schedule?showEvent=10931
Workshop homepage: http://phys2018.csail.mit.edu/submission.html
In particular, we will discuss our work on:
- Temporally Coherent, Volumetric GAN for Super-resolution Fluid Flow
- Latent-space Physics: Towards Learning the Temporal Evolution of Fluid Flow
- Coupled Fluid Density and Motion from Single Views
Detailed abstracts:
Latent-space Physics: Towards Learning the Temporal Evolution of Fluid Flow: Our work explores methods for the data-driven inference of the temporal evolution of physical functions with deep learning techniques. More specifically, we target fluid flow problems, and we propose a novel network architecture to predict the changes of the pressure field over time. The central challenge in this context is the high dimensionality of Eulerian space-time data sets. Key to arriving at a feasible algorithm is a dimensionality-reduction technique based on convolutional neural networks, combined with a dedicated architecture for temporal prediction. We demonstrate that dense 3D+time functions of physical systems can be predicted with neural networks, and we arrive at a neural-network-based simulation algorithm with practical speed-ups. We demonstrate the capabilities of our method with a series of complex liquid simulations and a set of single-phase simulations. Our method predicts pressure fields efficiently, running more than two orders of magnitude faster than a regular solver. Additionally, we present and discuss a series of detailed evaluations for the different components of our algorithm.
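To make the two-stage idea concrete, here is a minimal sketch in PyTorch (an illustrative assumption, not the published implementation): a convolutional autoencoder compresses each pressure volume into a small latent vector, and an LSTM predicts the next latent state from a history of previous ones. All layer sizes, resolutions, and class names are placeholders.

```python
# Illustrative sketch: dimensionality reduction + latent-space time stepping.
# Shapes and hyperparameters are assumptions, not the actual architecture.
import torch
import torch.nn as nn

class PressureAutoencoder(nn.Module):
    def __init__(self, latent_dim=256):
        super().__init__()
        # Encoder: strided 3D convolutions reduce a 64^3 pressure volume
        # to a low-dimensional latent vector.
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv3d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv3d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(64 * 8 * 8 * 8, latent_dim),
        )
        # Decoder mirrors the encoder.
        self.fc = nn.Linear(latent_dim, 64 * 8 * 8 * 8)
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(16, 1, 4, stride=2, padding=1),
        )

    def encode(self, x):
        return self.encoder(x)

    def decode(self, z):
        h = self.fc(z).view(-1, 64, 8, 8, 8)
        return self.decoder(h)

class LatentPredictor(nn.Module):
    """Predicts the next latent state from a window of previous ones."""
    def __init__(self, latent_dim=256, hidden=512):
        super().__init__()
        self.lstm = nn.LSTM(latent_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, latent_dim)

    def forward(self, z_seq):          # z_seq: (batch, time, latent_dim)
        h, _ = self.lstm(z_seq)
        return self.out(h[:, -1])      # next latent vector

# At simulation time: encode once, step cheaply in latent space,
# and decode only when a full pressure field is needed. The speed-up
# comes from stepping in the latent space instead of the full grid.
```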
Temporally Coherent, Volumetric GAN for Super-resolution Fluid Flow: We propose a temporally coherent generative model addressing the super-resolution problem for fluid flows. Our work represents the first approach to synthesize four-dimensional physics fields with neural networks. Based on a conditional generative adversarial network that is designed for the inference of three-dimensional volumetric data, our model generates consistent and detailed results by using a novel temporal discriminator, in addition to the commonly used spatial one. Our experiments show that the generator is able to infer more realistic high-resolution details by using additional physical quantities, such as low-resolution velocities or vorticities. Besides improvements in the training process and in the generated outputs, these inputs offer means for artistic control as well. We additionally employ a physics-aware data augmentation step, which is crucial to avoid overfitting and to reduce memory requirements. In this way, our network learns to generate advected quantities with highly detailed, realistic, and temporally coherent features. Our method works instantaneously, using only a single time-step of low-resolution fluid data. We demonstrate the abilities of our method using a variety of complex inputs and applications in two and three dimensions.
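The dual-discriminator setup can be summarized with a short loss sketch. The PyTorch snippet below is an illustrative reading of the idea, not the paper's implementation; the networks `G`, `D_s`, and `D_t`, their signatures, and the `lambda_l1` weight are all assumptions.

```python
# Illustrative two-discriminator GAN loss: D_s judges single high-res
# frames, D_t judges triplets of consecutive frames to enforce temporal
# coherence. Network internals and weighting factors are assumptions.
import torch
import torch.nn.functional as F

def bce(logits, target_is_real):
    target = torch.ones_like(logits) if target_is_real else torch.zeros_like(logits)
    return F.binary_cross_entropy_with_logits(logits, target)

def gan_losses(G, D_s, D_t, lowres, real_hr, lambda_l1=5.0):
    """lowres / real_hr: lists of three consecutive low-res inputs and
    ground-truth high-res frames (t-1, t, t+1). Signatures are assumed."""
    fake_hr = [G(x) for x in lowres]   # super-resolve each frame

    # Spatial discriminator judges one frame, conditioned on its low-res
    # input. Fakes are detached so this loss only updates the discriminators.
    loss_d = (bce(D_s(real_hr[1], lowres[1]), True)
              + bce(D_s(fake_hr[1].detach(), lowres[1]), False)
              # Temporal discriminator sees three consecutive frames stacked
              # along the channel axis, so it can penalize flickering.
              + bce(D_t(torch.cat(real_hr, dim=1)), True)
              + bce(D_t(torch.cat([f.detach() for f in fake_hr], dim=1)), False))

    # Generator tries to fool both discriminators and match the target (L1).
    loss_g = (bce(D_s(fake_hr[1], lowres[1]), True)
              + bce(D_t(torch.cat(fake_hr, dim=1)), True)
              + lambda_l1 * F.l1_loss(fake_hr[1], real_hr[1]))
    return loss_d, loss_g
```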
Coupled Fluid Density and Motion from Single Views: We present a novel method to reconstruct a fluid’s 3D density and motion based on just a single sequence of images. This is made possible by powerful physical priors for this strongly under-determined problem. More specifically, we propose a novel strategy to infer density updates strongly coupled to previous and current estimates of the flow motion. Additionally, we employ an accurate discretization and depth-based regularizers to compute stable solutions. Using only one view for the reconstruction drastically reduces the complexity of the capturing setup and could even allow for online video databases or smartphone videos as inputs. The reconstructed 3D velocity can then be flexibly utilized, e.g., for re-simulation, domain modification, or guiding purposes. We demonstrate the capabilities of our method with a series of synthetic test cases and the reconstruction of real smoke plumes captured with a Raspberry Pi camera.
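To illustrate the flavor of the underlying inverse problem, here is a heavily simplified optimization sketch in PyTorch: it recovers a density volume from one image by minimizing a rendering residual plus simple smoothness and positivity priors. The orthographic renderer, the priors, and all weights are placeholders; the actual method uses an accurate discretization, depth-based regularizers, and a coupling of the density updates to the estimated flow motion.

```python
# Conceptual sketch only: single-view density reconstruction posed as an
# optimization problem, solved here with plain gradient descent via autograd.
import torch

def render_single_view(density):
    # Orthographic, absorption-free "renderer": integrate along depth (z).
    return density.sum(dim=0)

def reconstruct(image, shape=(32, 64, 64), steps=500, lr=0.1,
                w_smooth=1e-3, w_pos=1.0):
    density = torch.zeros(shape, requires_grad=True)
    opt = torch.optim.Adam([density], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        residual = render_single_view(density) - image
        # Smoothness prior: penalize finite differences along each axis.
        smooth = sum((density.diff(dim=d) ** 2).mean() for d in range(3))
        # Soft positivity prior on the density.
        positivity = torch.relu(-density).mean()
        loss = (residual ** 2).mean() + w_smooth * smooth + w_pos * positivity
        loss.backward()
        opt.step()
    return density.detach()

# In the full method, consecutive frames are additionally coupled: the
# density update at frame t is constrained to be consistent with advecting
# the frame t-1 estimate by the current velocity estimate.
```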