Diffusion Models and Score Matching for Physics Simulations

We’re happy to announce that our paper titled “Score Matching via Differentiable Physics” on employing diffusion models for physical systems is available on arXiv now: https://arxiv.org/abs/2301.10250.

The key idea is to use a physics operator (a simulator) as the “drift” term of an SDE. Among other contributions, we give a derivation that connects the learned corrections to the score of the underlying data distribution. Here’s a visual overview:

The solutions produced by our SMDP method are (at least) on par with regular learned methods, while providing the fundamental advantage that they allow sampling the posterior. Here’s an example from a heat equation case:
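
To make the key idea concrete, here’s a minimal, hypothetical sketch of a single reverse-time integration step in this setting; `simulator` and `score_net` stand in for a differentiable physics operator and the trained network, and the constant diffusion coefficient `g` is an assumption for illustration:

```python
import torch

def reverse_step(x, t, dt, simulator, score_net, g=0.1):
    # One Euler-Maruyama step of the reverse-time SDE: the physics operator
    # acts as the drift, the learned score supplies the correction term.
    drift = simulator(x, t)             # physics simulator as SDE drift
    score = score_net(x, t)             # learned approximation of the score
    noise = torch.randn_like(x)
    return x - (drift - g**2 * score) * dt + g * (dt ** 0.5) * noise

x = torch.randn(16, 32)                 # batch of toy system states
x = reverse_step(x, t=1.0, dt=0.01,
                 simulator=lambda x, t: -x,                    # toy physics
                 score_net=lambda x, t: torch.zeros_like(x))   # placeholder
```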

Here’s the full paper abstract:

Diffusion models based on stochastic differential equations (SDEs) gradually perturb a data distribution p(x) over time by adding noise to it. A neural network is trained to approximate the score ∇_x log p_t(x) at time t, which can be used to reverse the corruption process. In this paper, we focus on learning the score field that is associated with the time evolution according to a physics operator in the presence of natural non-deterministic physical processes like diffusion. A decisive difference to previous methods is that the SDE underlying our approach transforms the state of a physical system to another state at a later time. For that purpose, we replace the drift of the underlying SDE formulation with a differentiable simulator or a neural network approximation of the physics. We propose different training strategies based on the so-called probability flow ODE to fit a training set of simulation trajectories and discuss their relation to the score matching objective. For inference, we sample plausible trajectories that evolve towards a given end state using the reverse-time SDE and demonstrate the competitiveness of our approach for different challenging inverse problems.

Video for “Guaranteed Conservation of Momentum for Learning Particle-based Fluid Dynamics”

The video for our NeurIPS’22 paper of the same name is finally online, enjoy:

As mentioned before, the source code can be found on GitHub: https://github.com/tum-pbs/DMCF . The full paper abstract is:

We present a novel method for guaranteeing linear momentum in learned physics simulations. Unlike existing methods, we enforce conservation of momentum with a hard constraint, which we realize via antisymmetrical continuous convolutional layers. We combine these strict constraints with a hierarchical network architecture, a carefully constructed resampling scheme, and a training approach for temporal coherence. In combination, the proposed method allows us to increase the physical accuracy of the learned simulator substantially. In addition, the induced physical bias leads to significantly better generalization performance and makes our method more reliable in unseen test cases. We evaluate our method on a range of different, challenging fluid scenarios. Among others, we demonstrate that our approach generalizes to new scenarios with up to one million particles. Our results show that the proposed algorithm can learn complex dynamics while outperforming existing approaches in generalization and training performance.

Untangled Layered Neural Fields for Mix-and-Match Virtual Try-On at NeurIPS

We’re happy to report that our paper “ULNeF: Untangled Layered Neural Fields for Mix-and-Match Virtual Try-On” will be presented at NeurIPS next week. This approach solves the problem of multi-layer cloth collision with a learned untangling operator.
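
To convey the intuition with a toy stand-in (this is not ULNeF’s actual neural operator): if each garment layer were an offset field over the body surface, untangling could amount to projecting the outer layer so it always stays above the inner one:

```python
import torch

h_inner = torch.rand(1024) * 0.02   # stand-in offsets of the inner layer
h_outer = torch.rand(1024) * 0.02   # stand-in offsets of the outer layer
eps = 1e-3                          # assumed separation margin
# Projection: wherever the outer layer dips below the inner one, lift it.
h_outer = torch.maximum(h_outer, h_inner + eps)
assert bool((h_outer >= h_inner + eps).all())   # collision-free by construction
```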

The project website is https://mslab.es/projects/ULNeF/ , and the corresponding video is highly recommended https://www.youtube.com/watch?v=elpg6LjC61k&feature=emb_logo.

Full abstract: Recent advances in neural models have shown great results for virtual try-on (VTO) problems, where a 3D representation of a garment is deformed to fit a target body shape. However, current solutions are limited to a single garment layer, and cannot address the combinatorial complexity of mixing different garments. Motivated by this limitation, we investigate the use of neural fields for mix-and-match VTO, and identify and solve a fundamental challenge that existing neural-field methods cannot address: the interaction between layered neural fields. To this end, we propose a neural model that untangles layered neural fields to represent collision-free garment surfaces. The key ingredient is a neural untangling projection operator that works directly on the layered neural fields, not on explicit surface representations. Algorithms to resolve object-object interaction are inherently limited by the use of explicit geometric representations, and we show how methods that work directly on neural implicit representations could bring a change of paradigm and open the door to radically different approaches.

“Learning Similarity Metrics for Volumetric Simulations with Multiscale CNNs” at AAAI Conference

We’re happy to announce that our paper “Learning Similarity Metrics for Volumetric Simulations with Multiscale CNNs” was just accepted at the AAAI Conference on Artificial Intelligence.

It targets the similarity assessment of complex simulated data sets, combining an entropy-based similarity model with a multiscale CNN for 3D multi-channel data. The preprint can be found here https://arxiv.org/abs/2202.04109, or try it out yourself via a pretrained model at https://github.com/tum-pbs/VOLSIM.

Full abstract: Simulations that produce three-dimensional data are ubiquitous in science, ranging from fluid flows to plasma physics. We propose a similarity model based on entropy, which allows for the creation of physically meaningful ground truth distances for the similarity assessment of scalar and vectorial data, produced from transport and motion-based simulations. Utilizing two data acquisition methods derived from this model, we create collections of fields from numerical PDE solvers and existing simulation data repositories, and highlight the importance of an appropriate data distribution for an effective training process. Furthermore, a multiscale CNN architecture that computes a volumetric similarity metric (VolSiM) is proposed. To the best of our knowledge this is the first learning method inherently designed to address the challenges arising for the similarity assessment of high-dimensional simulation data. Additionally, the tradeoff between a large batch size and an accurate correlation computation for correlation-based loss functions is investigated, and the metric’s invariance with respect to rotation and scale operations is analyzed. Finally, the robustness and generalization of VolSiM is evaluated on a large range of test data, as well as a particularly challenging turbulence case study, that is close to potential real-world applications.

“Reviving Autoencoder Pretraining” published in Neural Computing and Applications Journal

We’re happy to report that our paper “Reviving Autoencoder Pretraining” was finally published in the Neural Computing and Applications journal, and is available online now at https://rdcu.be/cYmbd.

In short, the paper targets learning features via a forward-backward pass that was inspired by the ping-pong loss from our TecoGAN paper. There we used it to self-supervise video sequences in terms of forward and backward dynamics, while the new paper uses it to train networks that are “as-invertible-as-possible”. Originally, we tried to coin the term “racecar” loss, the word “racecar” being a nice palindrome that highlights the bidirectional nature.
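
As a rough sketch of the idea (not the paper’s exact formulation): the reverse pass reuses the forward weights in transposed form, and an added reconstruction term encourages each layer to be as invertible as possible:

```python
import torch
import torch.nn as nn

layer = nn.Linear(64, 32, bias=False)
x = torch.randn(8, 64)
y = layer(x)                          # forward pass: y = x W^T
x_rec = y @ layer.weight              # reverse pass with the same weights
loss_task = y.pow(2).mean()           # stand-in for the actual task loss
loss = loss_task + (x - x_rec).pow(2).mean()   # added reconstruction term
loss.backward()
```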

Abstract: The pressing need for pretraining algorithms has been diminished by numerous advances in terms of regularization, architectures, and optimizers. Despite this trend, we re-visit the classic idea of unsupervised autoencoder pretraining and propose a modified variant that relies on a full reverse pass trained in conjunction with a given training task. This yields networks that are “as-invertible-as-possible”, and share mutual information across all constrained layers. We additionally establish links between singular value decomposition and pretraining and show how it can be leveraged for gaining insights about the learned structures. Most importantly, we demonstrate that our approach yields an improved performance for a wide variety of relevant learning and transfer tasks ranging from fully connected networks over residual neural networks to generative adversarial networks. Our results demonstrate that unsupervised pretraining has not lost its practical relevance in today’s deep learning environment.


“Scale-invariant Learning by Physics Inversion” at NeurIPS 2022

Our paper “Scale-invariant Learning by Physics Inversion” was accepted at NeurIPS and is available now at https://arxiv.org/abs/2109.15048.

The key insight is that custom inverse solvers (SIPs) provide a powerful tool that outperforms differentiable programming and supervised training. Here’s a preview, with x* denoting the reference:

Even though we only use them for the physics part, SIP training helps to reach levels of accuracy that are unattainable with other learning methods (the NN is still trained via simple backprop). The main takeaway: consider replacing your regular backprop gradients through the solver with something better (e.g., an inverse)! The source code is available at: https://github.com/tum-pbs/SIP
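
A minimal sketch of what “replacing the gradient” can look like in PyTorch, assuming a toy forward simulator `P` with an approximate analytic inverse `P_inv`; this is illustrative, not the paper’s implementation:

```python
import torch

def P(x):                             # toy forward physics with strong scaling
    return torch.sin(x) * 10.0

def P_inv(y):                         # approximate analytic inverse of P
    return torch.asin(torch.clamp(y / 10.0, -1.0, 1.0))

class SIPFunction(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        y = P(x)
        ctx.save_for_backward(x, y)
        return y

    @staticmethod
    def backward(ctx, grad_y):
        x, y = ctx.saved_tensors
        # Instead of the chain-rule gradient J^T grad_y, return a step toward
        # the input that the inverse maps to the desired output y - grad_y.
        return x - P_inv(y - grad_y)

x = torch.zeros(4, requires_grad=True)
loss = (SIPFunction.apply(x) - 1.0).pow(2).sum()
loss.backward()                       # x.grad now holds the inverse-based update
```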

Full paper abstract: Solving inverse problems, such as parameter estimation and optimal control, is a vital part of science. Many experiments repeatedly collect data and rely on machine learning algorithms to quickly infer solutions to the associated inverse problems. We find that state-of-the-art training techniques are not well-suited to many problems that involve physical processes. The highly nonlinear behavior, common in physical processes, results in strongly varying gradients that lead first-order optimizers like SGD or Adam to compute suboptimal optimization directions. We propose a novel hybrid training approach that combines higher-order optimization methods with machine learning techniques. We take updates from a scale-invariant inverse problem solver and embed them into the gradient-descent-based learning pipeline, replacing the regular gradient of the physical process. We demonstrate the capabilities of our method on a variety of canonical physical systems, showing that it yields significant improvements on a wide range of optimization and learning problems.

“Guaranteed Conservation of Momentum for Learning Particle-based Fluid Dynamics” at NeurIPS 2022

We’re happy to report that our paper “Guaranteed Conservation of Momentum for Learning Particle-based Fluid Dynamics” has been accepted at NeurIPS and is available now at https://arxiv.org/abs/2210.06036.

Here’s a preview of a generalization test where we apply our model (without changes, i.e. same time step as before) to a scene with over 1 million particles:

Interestingly, the convolutional architecture leads to lean and efficient models that outperform architectures such as graph nets. The positional inductive bias turns out to be important for physics simulations.
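
The momentum guarantee itself is easy to illustrate: if pairwise contributions are antisymmetric, i.e. f_ij = -f_ji, the summed momentum change cancels exactly. A toy stand-in (not the actual DMCF convolution layers):

```python
import torch

def antisymmetric_interactions(pos, radius=0.1):
    # Pairwise terms that are odd in (x_j - x_i), hence f_ij = -f_ji.
    diff = pos[None, :, :] - pos[:, None, :]        # diff[i, j] = x_j - x_i
    dist = diff.norm(dim=-1, keepdim=True) + 1e-8
    f = (dist < radius).float() * diff / dist       # antisymmetric by construction
    return f.sum(dim=1)                             # per-particle momentum change

pos = torch.rand(100, 2)
dv = antisymmetric_interactions(pos)
print(dv.sum(dim=0))                                # ~0: momentum is conserved
```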

Source code and data sets are already available under “Deep Momentum Conserving Fluids” on GitHub: https://github.com/tum-pbs/DMCF

Full paper abstract: We present a novel method for guaranteeing linear momentum in learned physics simulations. Unlike existing methods, we enforce conservation of momentum with a hard constraint, which we realize via antisymmetrical continuous convolutional layers. We combine these strict constraints with a hierarchical network architecture, a carefully constructed resampling scheme, and a training approach for temporal coherence. In combination, the proposed method allows us to increase the physical accuracy of the learned simulator substantially. In addition, the induced physical bias leads to significantly better generalization performance and makes our method more reliable in unseen test cases. We evaluate our method on a range of different, challenging fluid scenarios. Among others, we demonstrate that our approach generalizes to new scenarios with up to one million particles. Our results show that the proposed algorithm can learn complex dynamics while outperforming existing approaches in generalization and training performance.

Realistic galaxy images and improved robustness in machine learning tasks from generative modelling

We’re happy to highlight the recent work of Benjamin, which was just officially published in the Monthly Notices of the Royal Astronomical Society (MNRAS). The preprint version can be found on arXiv at https://arxiv.org/abs/2203.11956. This work demonstrates the usefulness of deep learning-based generative models in the context of astronomy applications. We demonstrate a denoising task, and this model could be a very interesting building block for other tasks such as finding gravitational lenses.

The full source code for training and evaluating the model and the variants shown in the paper can be found on GitHub: https://github.com/Akanota/galaxies-metrics-denoising

We examine the capability of generative models to produce realistic galaxy images. We also show that mixing generated data with the original data improves the robustness in downstream machine learning tasks. We demonstrate that by supplementing the training data set with generated data, it is possible to significantly improve the robustness against domain-shifts and out-of-distribution data.
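
In spirit, the data mixing is as simple as it sounds; a schematic sketch with stand-in arrays (the actual loaders, models, and mixing ratios are in the repository):

```python
import numpy as np

rng = np.random.default_rng(0)
real = rng.random((1000, 64, 64))       # stand-in for observed galaxy images
generated = rng.random((500, 64, 64))   # stand-in for samples from the model
train_set = np.concatenate([real, generated])   # supplement the training data
rng.shuffle(train_set)                  # mix real and generated examples
```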

Unrolling Training for Differentiable Physics vs Supervised

When explaining differentiable physics training, the following question regularly comes up: “So, how much does training with this differentiable physics simulator really improve things? E.g., compared to unrolling a supervised setup?” In short, a lot! Here’s the answer for a nice turbulent mixing layer case (all from Björn’s paper, available as a preprint at https://arxiv.org/abs/2202.06988).

First, we can qualitatively show the differences. Here’s the DNS reference version compared to a full differentiable physics training with 10 steps, and the “supervised unrolling” version. The latter has the backpropagation through the solver disabled, but is identical otherwise:

It’s quite obvious that the long-term feedback through the solver is missing for the supervised version. It produces noticeable artifacts and oscillations. Note that both versions still use a powerful set of carefully selected turbulence losses, as described in the paper. Otherwise the difference would be even larger.
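
In code, the difference between the two setups boils down to a single detach; here’s a hedged sketch with assumed `solver` and `model` interfaces (not Björn’s actual training code):

```python
import torch

def rollout_loss(u0, targets, solver, model, steps=10, differentiable=True):
    u, loss = u0, 0.0
    for i in range(steps):
        u = solver(u)                    # one differentiable PDE solver step
        if not differentiable:
            u = u.detach()               # "supervised unrolling": cut the
                                         # long-term feedback through the solver
        u = u + model(u)                 # learned turbulence correction
        loss = loss + torch.mean((u - targets[i]) ** 2)
    return loss / steps
```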

We can also quantify these differences in terms of the resolved Reynolds stresses:

The ground truth targets are shown as orange dots here. There’s a clear gap that the differentiable physics training can close towards the reference.

And here for the resolved turbulent kinetic energy. On the positive side of the graph, the DP version aligns better with the ground truth:

All of these results are from a very late time step: after evaluating 1024 time steps with the hybrid solver, i.e., alternating evaluations of the PISO solver and the trained neural network turbulence model. With a carefully chosen learning setup, even the supervised model stays stable for this time span, but the model nonetheless does significantly better when it receives the long-term feedback in the form of gradients from the differentiable flow solver.

To summarize, unrolling time steps with differentiable physics training gives significant gains in the accuracy of neural-network-powered solvers. Especially when such an NN-based turbulence model is to be trained once, and then deployed and evaluated many times in an application, the one-time cost to train the model together with the solver will amortize very quickly.

Physics-based Deep Learning Book v0.2

We’re happy to publish v0.2 of our “Physics-Based Deep Learning” book #PBDL. The main goal is still a thorough hands-on introduction for physics simulations with deep learning, and the new version contains a large new part on improved learning methods. The main document is available at https://www.physicsbaseddeeplearning.org/ , or as PDF at https://arxiv.org/abs/2109.05237.

Among others, we’re explaining the “scale-invariant physics” training which includes higher-order information via inverse simulators. It naturally comes with code examples, e.g., this inverse heat problem:
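
For the heat equation, an inverse simulator can be written down directly in Fourier space. Here’s a small, self-contained sketch along those lines (illustrative, not the book’s notebook code): it diffuses a 1D periodic state and approximately inverts the step, clipping the amplification of high modes to keep the inversion stable:

```python
import numpy as np

N, nu, dt = 64, 0.05, 0.2
k = 2 * np.pi * np.fft.fftfreq(N, d=1.0 / N)         # wavenumbers

def heat_step(u, sign=+1):
    # sign=+1: forward diffusion; sign=-1: regularized inverse step.
    factor = np.exp(-sign * nu * k**2 * dt)
    factor = np.minimum(factor, 1e3)                  # clip inverse amplification
    return np.real(np.fft.ifft(np.fft.fft(u) * factor))

u0 = np.sin(2 * np.pi * np.linspace(0, 1, N, endpoint=False))
u1 = heat_step(u0)                 # forward in time
u0_rec = heat_step(u1, sign=-1)    # approximate inversion, u0_rec ≈ u0
```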

Also, half-inverse gradients are explained in detail now. They jointly invert the physics and the neural network to obtain optimal updates for a whole mini-batch; here’s an example implementation that runs on the spot in Colab:
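
The core update can be sketched compactly (hedged; see the paper and book for the real algorithm): the singular values of the batch Jacobian are raised to the power -1/2, halfway between plain gradients (exponent +1) and a full Gauss-Newton-style inversion (exponent -1):

```python
import numpy as np

def half_inverse_update(J, grad_y, eta=1e-2, eps=1e-6):
    # J: Jacobian of the stacked batch outputs w.r.t. the network parameters.
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    s_half_inv = 1.0 / np.sqrt(np.maximum(s, eps))   # exponent -1/2, truncated
    return -eta * Vt.T @ (s_half_inv * (U.T @ grad_y))

J = np.random.randn(8, 20)         # toy Jacobian: 8 outputs, 20 parameters
grad_y = np.random.randn(8)        # loss gradient w.r.t. the outputs
delta_theta = half_inverse_update(J, grad_y)   # parameter update, shape (20,)
```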

… and of course we fixed numerous typos throughout all chapters, cleaned up the code, and clarified explanations. Please let us know if you find any 🙂 ! This text will also serve as the basis and script for our upcoming course Advanced Deep Learning for Physics (IN2298), or ADL4Physics for short.

ICLR 2022 Spotlight Paper “Half-inverse Gradients”

We’re happy to advertise our ICLR’22 spotlight paper on half-inverse gradients (HIGs) for physics problems in deep learning. We propose a new method that bridges the gap between “classical” optimizers and machine learning methods. We’re showing its advantages and behavior for a wide range of physical problems, from a simple non-linear oscillator, over a diffusion PDE, to a quantum dipole control problem. So – no Navier-Stokes, yet – but that’s definitely an exciting outlook 😀

arXiv preprint: https://arxiv.org/abs/2203.10131

Code repository: https://github.com/tum-pbs/half-inverse-gradients

Abstract: Recent works in deep learning have shown that integrating differentiable physics simulators into the training process can greatly improve the quality of results. Although this combination represents a more complex optimization task than supervised neural network training, the same gradient-based optimizers are typically employed to minimize the loss function. However, the integrated physics solvers have a profound effect on the gradient flow as manipulating scales in magnitude and direction is an inherent property of many physical processes. Consequently, the gradient flow is often highly unbalanced and creates an environment in which existing gradient-based optimizers perform poorly. In this work, we analyze the characteristics of both physical and neural network optimizations to derive a new method that does not suffer from this phenomenon. Our method is based on a half-inversion of the Jacobian and combines principles of both classical network and physics optimizers to solve the combined optimization task. Compared to state-of-the-art neural network optimizers, our method converges more quickly and yields better solutions, which we demonstrate on three complex learning problems involving nonlinear oscillators, the Schroedinger equation and the Poisson problem.

Learned Turbulence Modeling with Differentiable Fluid Solvers Paper Preview

Below you can preview our newest paper on learned turbulence modeling with differentiable solvers. The learned model closely reproduces the turbulent statistics of the DNS reference, and matches its accuracy while being more than an order of magnitude faster. The training is performed jointly with a classic, second-order PISO solver that is made differentiable via phiflow.

Preprint available on arXiv: https://arxiv.org/abs/2202.06988


This is all based on a validated, second-order-in-time PISO solver that is integrated into neural network training with the “differentiable physics” approach.

Abstract: In this paper, we train turbulence models based on convolutional neural networks. These learned turbulence models improve under-resolved low resolution solutions to the incompressible Navier-Stokes equations at simulation time. Our method involves the development of a differentiable numerical solver that supports the propagation of optimisation gradients through multiple solver steps. We showcase the significance of this property by demonstrating the superior stability and accuracy of those models that featured a higher number of unrolled steps during training. This approach is applied to three two-dimensional turbulence flow scenarios, a homogeneous decaying turbulence case, a temporally evolving mixing layer and a spatially evolving mixing layer. Our method achieves significant improvements of long-term a-posteriori statistics when compared to no-model simulations, without requiring these statistics to be directly included in the learning targets. At inference time, our proposed method also gains substantial performance improvements over similarly accurate, purely numerical methods.

“Learning Similarity Metrics for Volumetric Simulations with Multiscale CNNs” on arXiv

Our paper on volumetric similarity metrics for simulations (VolSiM) is online now at https://github.com/tum-pbs/VOLSIM and https://arxiv.org/abs/2202.04109 , proposing, among other contributions, an entropy-based similarity model. Enjoy!

Abstract: Simulations that produce three-dimensional data are ubiquitous in science, ranging from fluid flows to plasma physics. We propose a similarity model based on entropy, which allows for the creation of physically meaningful ground truth distances for the similarity assessment of scalar and vectorial data, produced from transport and motion-based simulations. Utilizing two data acquisition methods derived from this model, we create collections of fields from numerical PDE solvers and existing simulation data repositories, and highlight the importance of an appropriate data distribution for an effective training process. Furthermore, a multiscale CNN architecture that computes a volumetric similarity metric (VolSiM) is proposed. To the best of our knowledge this is the first learning method inherently designed to address the challenges arising for the similarity assessment of high-dimensional simulation data. Additionally, the tradeoff between a large batch size and an accurate correlation computation for correlation-based loss functions is investigated, and the metric’s invariance with respect to rotation and scale operations is analyzed. Finally, the robustness and generalization of VolSiM is evaluated on a large range of test data, as well as a particularly challenging turbulence case study, that is close to potential real-world applications.

Talks about Differentiable Physics Simulations and Deep Learning for PDEs coming up

Nils Thuerey will give several talks about differentiable simulations and deep learning for physics problems and PDEs in the coming weeks. We hope to see some of our website visitors during these talks! Here’s a preview for some of them…

Abstract: In this talk I will focus on the possibilities that arise from recent advances in the area of deep learning for physical simulations. In this context, especially the Navier-Stokes equations represent an interesting and challenging advection-diffusion PDE that poses a variety of challenges for deep learning methods.

In particular, I will focus on differentiable physics solvers from the larger field of differentiable programming. Differentiable solvers are very powerful tools to integrate into deep learning processes. The existing numerical methods for efficient solvers can be leveraged within learning tasks to provide crucial information in the form of reliable gradients to update the weights of a neural network. Interestingly, it turns out to be beneficial to combine supervised and physics-based approaches. The former poses a much simpler learning task by providing explicit reference data that is typically pre-computed. Physics-based learning on the other hand can provide gradients for a larger space of states that are only encountered at training time. Here, differentiable solvers are particularly powerful to, e.g., provide neural networks with feedback about how inferred solutions influence the long-term behavior of a physical model.

I will demonstrate this concept with several examples from learning to reduce numerical errors, over long-term planning and control, to generalization. I will conclude by discussing current limitations and by giving an outlook about promising future directions.

Paper Previews: Improved Physical Learning and New Directions for Differentiable Simulations

We have a variety of highly interesting research papers in the works. Obviously, these are work in progress, but nonetheless warrant a quick preview at this time of the year. Here are two examples which can be found on arXiv by now. The first one targets the connection of classical optimization schemes and deep learning methods for physical systems (via what we’ve dubbed “physical gradients”).

The second paper targets “incomplete” PDE solvers, i.e., solvers where only a part of the full PDE is known and available at training time, and the remainder is only specified via the training data. If you’ve worked with differentiable solvers, you can probably imagine that a neural network is able to learn the missing part of the PDE. However, it is nonetheless nice to have concrete results demonstrating this concept for several interesting and non-trivial reacting flow cases.
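
Schematically, such a setup pairs a numerical right-hand side for the known PDE terms with a network that learns the missing ones; a hypothetical minimal version (the names and the simple Euler step are assumptions, not the paper’s solver):

```python
import torch
import torch.nn as nn

def known_rhs(u):
    # Finite-difference Laplacian: the part of the PDE we can write down.
    return torch.roll(u, 1, -1) + torch.roll(u, -1, -1) - 2.0 * u

missing_rhs = nn.Sequential(           # learns the unknown PDE terms from data
    nn.Conv1d(1, 16, 3, padding=1), nn.Tanh(),
    nn.Conv1d(16, 1, 3, padding=1))

def hybrid_step(u, dt=0.1):
    return u + dt * (known_rhs(u) + missing_rhs(u))

u = torch.rand(4, 1, 64)               # batch of periodic 1D states
u_next = hybrid_step(u)
```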

Φ-Flow 2.0 is now live!

We’re happy to announce version 2.0 of Φ-Flow. It’s the latest version of our framework for differentiable physical simulations and deep learning: https://github.com/tum-pbs/PhiFlow

Version 2 contains numerous new features: named dimensions (no more reshaping!), support for NumPy, TensorFlow, PyTorch & JAX backends, and cross-platform JIT compilation.
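
For instance, named dimensions let you write reductions and broadcasts by name instead of by axis index; a small snippet along these lines (exact API details may differ slightly between 2.x versions):

```python
from phi import math
from phi.math import batch, spatial

# Tensors carry named dimensions instead of anonymous positional axes.
data = math.random_normal(batch(examples=8), spatial(x=64, y=64))
mean = math.mean(data, dim='examples')   # reduce by name, no reshaping needed
print(data.shape)                        # named dims: examples, x, y
```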

Documentation: https://tum-pbs.github.io/PhiFlow/

Philipp Holl, the main developer of phiflow, has also started recording his series of tutorial videos. Here’s a first one about basic math functions: https://youtu.be/4nYwL8ZZDK8

And a second one about grid handling, resampling and named dimensions: https://youtu.be/YRi_c0v3HKs

You can start experimenting right away using the Φ-Flow playground on colab: https://colab.research.google.com/drive/1zBlQbmNguRt-Vt332YvdTqlV4DBcus2S#offline=true&sandboxMode=true

Physics-based Deep Learning book (PBDL) – Additional Materials

We have also just released the PDF version of our Physics-based Deep Learning book. It can come in handy if you want to read it offline in one go: https://arxiv.org/pdf/2109.05237.pdf

Looking ahead, we’re of course also planning a printed version with extra-thick, glossy paper and a large font size so that typing up the Jupyter notebooks is extra convenient. (No, of course not 😄 – rather, please don’t print the PDF!)

In addition, there’s also an introductory video that summarizes the goals of the PBDL book, and runs through some of the highlights: https://youtu.be/SU-OILSmR1M

Introducing the Physics-based Deep Learning book (PBDL)

We’re very happy to publish a first version of the digital “Physics-based Deep Learning” book: https://www.physicsbaseddeeplearning.org

Its central goal is to give a thorough, hands-on introduction to deep learning for physical systems, from simple physical losses to full hybrid solvers. Best of all: the majority of the topics come with Jupyter notebooks that can be run on the spot!

To mention a few highlights: the book contains a notebook to train hybrid fluid flow (Navier-Stokes) solvers via differentiable physics to reduce numerical errors. Try it out:
https://colab.research.google.com/github/tum-pbs/pbdl-book/blob/main/diffphys-code-sol.ipynb

It also has example code to train a Bayesian Neural Network for RANS flow predictions around airfoils that yields uncertainty estimates. You can run the code right away here:
https://colab.research.google.com/github/tum-pbs/pbdl-book/blob/main/bayesian-code.ipynb

And a notebook to compare proximal policy-based reinforcement learning with physics-based learning for controlling PDEs (spoiler: the physics-aware version does better in the end). Give it a try:
https://colab.research.google.com/github/tum-pbs/pbdl-book/blob/main/reinflearn-code.ipynb

Here’s the full synopsis of the book:

The PBDL book contains a practical and comprehensive introduction of everything related to deep learning in the context of physical simulations. As much as possible, all topics come with hands-on code examples in the form of Jupyter notebooks to quickly get started. Beyond standard supervised learning from data, we’ll look at physical loss constraints, more tightly coupled learning algorithms with differentiable simulations, as well as reinforcement learning and uncertainty modeling. We live in exciting times: these methods have a huge potential to fundamentally change what we can achieve with simulations.

The key aspects that we will address in the following are:

  • explain how to use deep learning techniques to solve PDE problems,
  • how to combine them with existing knowledge of physics,
  • without discarding our knowledge about numerical methods.

The focus of this book lies on:

  • Field-based simulations (no Lagrangian methods)
  • Combinations with deep learning (plenty of other interesting ML techniques, but not here)
  • Experiments as outlook (i.e., replace synthetic data with real-world observations)

The name of this book, Physics-Based Deep Learning, denotes combinations of physical modeling and numerical simulations with methods based on artificial neural networks. The general direction of Physics-Based Deep Learning represents a very active, quickly growing and exciting field of research.

The aim is to build on all the powerful numerical techniques that we have at our disposal, and use them wherever we can. As such, a central goal of this book is to reconcile the data-centered viewpoint with physical simulations.

The resulting methods have a huge potential to improve what can be done with numerical methods: in scenarios where a solver targets cases from a certain well-defined problem domain repeatedly, it can for instance make a lot of sense to once invest significant resources to train a neural network that supports the repeated solves. Based on the domain-specific specialization of this network, such a hybrid could vastly outperform traditional, generic solvers.

High-accuracy transonic RANS flow predictions with deep neural networks: preprint online now

Our paper on high-accuracy airfoil flow predictions (Reynolds-averaged Navier-Stokes) with deep neural networks is online now. Interestingly, it turns out you don’t need complex architectures like graph neural networks to handle adaptive, changing meshes. The preprint is available on arXiv: https://arxiv.org/abs/2109.02183

Even the “worst case” results are very accurate; here’s an example comparison:

The trained neural network yields results that resolve all necessary structures such as shocks, and has an average error of less than 0.3% for turbulent transonic cases. This is appropriate for real-world, industrial applications. Here’s the inferred skin friction coefficient for the ‘e221’ airfoil:

We also directly compare to a graph neural network (GNN) approach (one that is additionally coupled with a solver), yielding a ca. 3.6x improvement in terms of RMSE across all inferred fields. This is achieved with a much simpler and faster method…

Full Paper Abstract: The present study investigates the accurate inference of Reynolds-averaged Navier-Stokes solutions for the compressible flow over aerofoils in two dimensions with a deep neural network. Our approach yields networks that learn to generate precise flow fields for varying body-fitted, structured grids by providing them with an encoding of the corresponding mapping to a canonical space for the solutions. We apply the deep neural network model to a benchmark case of incompressible flow at randomly given angles of attack and Reynolds numbers and achieve an improvement of more than an order of magnitude compared to previous work. Further, for transonic flow cases, the deep neural network model accurately predicts complex flow behaviour at high Reynolds numbers, such as shock wave/boundary layer interaction, and quantitative distributions like pressure coefficient, skin friction coefficient as well as wake total pressure profiles downstream of aerofoils. The proposed deep learning method significantly speeds up the predictions of flow fields and shows promise for enabling fast aerodynamic designs.

Learning Meaningful Controls for Fluids with GANs

Interested in GANs that learn physical spaces and are properly conditioned by input parameters? Our SIGGRAPH paper describes how to do exactly this: https://rachelcmy.github.io/den2vel/

The learning process is enabled (among others) by a discriminator that self-supervises in terms of a physical quantity such as the marker density.

This makes it possible to obtain velocities and run a simulation purely based on a marker density. We couldn’t actually rewrite the Navier-Stokes equations in this form ourselves, but the generator learns the mapping from the training data.
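
At inference time this yields a simple loop, sketched here with stand-ins (the untrained conv net and the crude advection below are purely illustrative, not the paper’s network or solver):

```python
import torch
import torch.nn as nn

G = nn.Conv2d(1, 2, kernel_size=5, padding=2)   # stand-in for the trained generator

def advect(d, v, dt=1.0):
    # Crude stand-in for advection: shift the density by the mean flow.
    s = (v.mean(dim=(2, 3)) * dt).round().long()[0]
    return torch.roll(d, shifts=(int(s[0]), int(s[1])), dims=(2, 3))

d = torch.rand(1, 1, 64, 64)                    # marker density field
with torch.no_grad():
    for _ in range(10):
        v = G(d)                                # density -> velocity
        d = advect(d, v)                        # velocity advects the density
```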

Full paper abstract: While modern fluid simulation methods achieve high-quality simulation results, it is still a big challenge to interpret and control motion from visual quantities, such as the advected marker density. These visual quantities play an important role in user interactions: Being familiar and meaningful to humans, these quantities have a strong correlation with the underlying motion. We propose a novel data-driven conditional adversarial model that solves the challenging, and theoretically ill-posed problem of deriving plausible velocity fields from a single frame of a density field. Besides density modifications, our generative model is the first to enable the control of the results using all of the following control modalities: obstacles, physical parameters, kinetic energy, and vorticity. Our method is based on a new conditional generative adversarial neural network that explicitly embeds physical quantities into the learned latent space, and a new cyclic adversarial network design for control disentanglement. We show the high quality and versatile controllability of our results for density-based inference, realistic obstacle interaction, and sensitive responses to modifications of physical parameters, kinetic energy, and vorticity.