Source code for Navier-Stokes shape optimizations with DL surrogates

Our paper on deep learning surrogates for Navier-Stokes simulations has been accepted for publication in the Journal of Fluid Mechanics (https://doi.org/10.1017/jfm.2021.398). We’ve also just released the full source code for the project: https://github.com/tum-pbs/dl-surrogates

It contains everything that’s necessary to train a neural network to perform very fast shape optimizations, reducing the drag of a level-set based shape immersed in a moving fluid. The neural network is inherently differentiable and very fast to evaluate. Hence, the trained networks represent a great building block for inverse problems. For completeness, here’s the full abstract of the paper.

Abstract: Efficiently predicting the flowfield and load in aerodynamic shape optimisation remains a highly challenging and relevant task. Deep learning methods have been of particular interest for such problems, due to their success for solving inverse problems in other fields. In the present study, U-net based deep neural network (DNN) models are trained with high-fidelity datasets to infer flow fields, and then employed as surrogate models to carry out the shape optimisation problem, i.e. to find a drag minimal profile with a fixed cross-section area subjected to a two-dimensional steady laminar flow. A level-set method as well as Bezier-curve method are used to parameterise the shape, while trained neural networks in conjunction with automatic differentiation are utilized to calculate the gradient flow in the optimisation framework. The optimised shapes and drag force values calculated from the flowfields predicted by DNN models agree well with reference data obtained via a Navier-Stokes solver and from the literature, which demonstrates that the DNN models are capable of predicting not only flowfield but also yield satisfactory aerodynamic forces. This is particularly promising as the DNNs were not specifically trained to infer aerodynamic forces. In conjunction with the fast runtime, the DNN-based optimisation framework shows promise for general aerodynamic design problems.
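To make the idea concrete, here is a small, self-contained PyTorch sketch of such an optimization loop. It only illustrates the principle and is not the code from the repository: the “surrogate” below is an untrained toy CNN standing in for the pre-trained U-net, the shape is a simple smoothed ellipse, and the drag term is a crude momentum-deficit proxy.

import torch

torch.manual_seed(0)
N = 64
ys, xs = torch.meshgrid(torch.linspace(-1, 1, N), torch.linspace(-1, 1, N), indexing="ij")

surrogate = torch.nn.Sequential(                      # stand-in for the trained U-net
    torch.nn.Conv2d(1, 16, 5, padding=2), torch.nn.ReLU(),
    torch.nn.Conv2d(16, 2, 5, padding=2),             # outputs (u, v) velocity channels
)
for p in surrogate.parameters():
    p.requires_grad_(False)                           # the network stays fixed, only the shape moves

params = torch.tensor([0.4, 0.4], requires_grad=True) # ellipse semi-axes as shape parameters
opt = torch.optim.Adam([params], lr=1e-2)

def shape_mask(p):
    # Smooth (differentiable) inside/outside indicator of an ellipse.
    sdf = (xs / p[0]) ** 2 + (ys / p[1]) ** 2 - 1.0
    return torch.sigmoid(-20.0 * sdf)[None, None]     # shape (1, 1, N, N)

target_area = shape_mask(params).mean().detach()      # keep the cross-section area fixed

for it in range(200):
    opt.zero_grad()
    mask = shape_mask(params)
    flow = surrogate(mask)                            # differentiable flow prediction
    drag_proxy = ((1.0 - flow[:, 0]) * mask).mean()   # crude momentum-deficit drag estimate
    loss = drag_proxy + 100.0 * (mask.mean() - target_area) ** 2
    loss.backward()                                   # gradients reach the shape parameters
    opt.step()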

CVPR Papers Available Online Now

We’re happy to report that all three CVPR papers are online now. They cover a wide range of topics, from differentiable physics and rendering (for fluids) and learning collision-free spaces (for cloth) to dynamic scenes (for neural rendering).

Paper Detailing Our Latest ConvNet-based Model for WeatherBench is Online

Our own entry in the WeatherBench benchmark is now published in the Journal of Advances in Modeling Earth Systems. It outperforms existing works with an RMSE of 268 and 499 m²/s² for 3- and 5-day Z500 forecasts, respectively. It’s also at least on par with a full traditional model running at a similar resolution. That said, it still clearly falls behind the operational forecasting reference. Hopefully, it will inspire more people to join the WeatherBench challenge and further improve the forecasts!

The full article can be read here, and the current leaderboard for WeatherBench can be found on the corresponding GitHub page.

Paper Abstract: Numerical weather prediction has traditionally been based on the models that discretize the dynamical and physical equations of the atmosphere. Recently, however, the rise of deep learning has created increased interest in purely data-driven medium-range weather forecasting with first studies exploring the feasibility of such an approach. To accelerate progress in this area, the WeatherBench benchmark challenge was defined. Here, we train a deep residual convolutional neural network (Resnet) to predict geopotential, temperature and precipitation at 5.625° resolution up to 5 days ahead. To avoid overfitting and improve forecast skill, we pretrain the model using historical climate model output before fine-tuning on reanalysis data. The resulting forecasts outperform previous submissions to WeatherBench and are comparable in skill to a physical baseline at similar resolution. We also analyze how the neural network makes its predictions and find that the model has learned physically reasonable correlations.
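The RMSE values above use the latitude-weighted RMSE defined for WeatherBench. As a quick reference, a small numpy sketch of that metric (the variable names and the random test data are purely illustrative) could look like this:

import numpy as np

def weighted_rmse(forecast, truth, lats_deg):
    """Latitude-weighted RMSE, roughly following the WeatherBench definition.
    forecast, truth: arrays of shape (time, lat, lon); lats_deg: shape (lat,)."""
    w = np.cos(np.deg2rad(lats_deg))
    w = w / w.mean()                                               # normalize weights to mean 1
    err2 = (forecast - truth) ** 2                                 # squared error per grid point
    rmse_t = np.sqrt((err2 * w[None, :, None]).mean(axis=(1, 2)))  # one RMSE per forecast time
    return rmse_t.mean()                                           # average over forecasts

# Illustrative usage on the 5.625° grid (32 x 64 points) with random data:
lats = np.linspace(-87.1875, 87.1875, 32)
f = np.random.randn(10, 32, 64)
o = np.random.randn(10, 32, 64)
print(weighted_rmse(f, o, lats))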

CORDIS Article on ERC Starting Grant realFlow

Our ERC Starting Grant “realFlow” has finished (it concluded in April 2020), but it’s nonetheless nice to see it featured on the CORDIS website:

https://cordis.europa.eu/article/id/429170-teaching-neural-networks-to-go-with-the-flow?WT.mc_id=exp

The nice image there is from our temporally coherent fluid GAN (tempoGAN), published at SIGGRAPH 2018. Interestingly, few works since then have been able to handle 4D data sets (3D volumes over time) while taking into account how the learned functions should change over time.

The cutout above is from our largest example, with an input resolution of 1024 × 720 × 720 cells over 200 time steps. With tempoGAN’s 4× super-resolution along each spatial dimension, the CNN generated a total of 6,794,772,480,000 output cells (i.e., more than 6 trillion cells) for this sequence.
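A quick sanity check of that number, assuming the 4× super-resolution factor along each spatial dimension:

# Input volume of 1024 x 720 x 720 cells, 200 time steps, 4x spatial upsampling.
nx, ny, nz, steps, upsampling = 1024, 720, 720, 200, 4
cells = (nx * upsampling) * (ny * upsampling) * (nz * upsampling) * steps
print(cells)  # 6794772480000, i.e. more than 6 trillion output cells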

Three accepted papers at CVPR 2021

We’re happy to report that three of our papers have been accepted to the CVPR 2021 conference, two of them as orals. Details will follow over the next weeks, but as a preview we have:

  • Global Transport for Fluid Reconstruction with Learned Self-Supervision (oral), together with the CGL at ETH Zurich, congratulations Erik!
  • Neural Scene Graphs for Dynamic Scenes (oral), together with AlgoLux and the Princeton CI lab, congratulations Julian!
  • Self-Supervised Collision Handling via Generative 3D Garment Models for Virtual Try-On, together with the Multimodal Simulation Lab at Universidad Rey Juan Carlos, congratulations Igor!

Preprints and details will be available soon.

Talk about Differentiable Physics Simulations for Deep Learning

Nils Thuerey recently gave a talk at LLNL (https://www.llnl.gov/) about Differentiable Physics Simulations for Deep Learning. While we’re preparing the next release of our differentiable simulation framework PhiFlow (https://github.com/tum-pbs/PhiFlow), you can check out the talk here:

Talk abstract: In this talk I will focus on the possibilities that arise from recent advances in the area of deep learning for physical simulations. In this context, the Navier-Stokes equations in particular represent an interesting and challenging advection-diffusion PDE that poses a variety of challenges for deep learning methods.

In particular, I will focus on differentiable physics solvers within the larger field of differentiable programming. Differentiable solvers are very powerful tools to guide deep learning processes, and support finding desirable solutions. The existing numerical methods for efficient solvers can be leveraged within learning tasks to provide crucial information in the form of reliable gradients to update the weights of a neural network. Interestingly, it turns out to be beneficial to combine supervised and physics-based approaches. The former poses a much simpler learning task by providing explicit reference data that is typically pre-computed. Physics-based learning on the other hand can provide gradients for a larger space of states that are only encountered during training runs. Here, differentiable solvers are particularly powerful to, e.g., provide neural networks with feedback about how inferred solutions influence the long-term behavior of a physical model.

I will demonstrate this concept with several examples, from learning to reduce numerical errors, through long-term planning and control, to generalization. I will conclude by discussing current limitations and by giving an outlook about promising future directions.

Happy New Year (somewhat belatedly…) – looking back at a successful year 2020

Despite 2020 being a challenging year for various non-research-related reasons (Covid, anyone?), the TUM P.B.S. group can look back on a very successful year. We’ve had a very nice series of publications, among others with papers at the NeurIPS, ICML and ICLR conferences.

Here’s a quick re-cap of what happened:

And although it had been online for a while, our AIAA journal paper Deep Learning Methods for Reynolds-Averaged Navier-Stokes Simulations of Airfoil Flows also officially appeared in 2020.

And of course we have quite a list of exciting works in progress, stay tuned … 😃

Solver-in-the-Loop – Deep Learning via Differentiable PDEs at NeurIPS’20

Our paper on deep learning algorithms interacting with differentiable PDE solvers was just presented at NeurIPS. And just in time for the conference, we also finished uploading the last piece of the corresponding source code release.

An extended version of our CG Solver-in-the-Loop results from our NeurIPS’20 paper is finally online at: https://github.com/tum-pbs/CG-Solver-in-the-Loop

The main code for the paper is also available at: https://github.com/tum-pbs/Solver-in-the-Loop

This is the full abstract of the paper: Finding accurate solutions to partial differential equations (PDEs) is a crucial task in all scientific and engineering disciplines. It has recently been shown that machine learning methods can improve the solution accuracy by correcting for effects not captured by the discretized PDE. We target the problem of reducing numerical errors of iterative PDE solvers and compare different learning approaches for finding complex correction functions. We find that previously used learning approaches are significantly outperformed by methods that integrate the solver into the training loop and thereby allow the model to interact with the PDE during training. This provides the model with realistic input distributions that take previous corrections into account, yielding improvements in accuracy with stable rollouts of several hundred recurrent evaluation steps and surpassing even tailored supervised variants. We highlight the performance of the differentiable physics networks for a wide variety of PDEs, from non-linear advection-diffusion systems to three-dimensional Navier-Stokes flows.

Additional details can be found on the project page.
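As a rough illustration of the core idea, the following self-contained PyTorch sketch unrolls a simple differentiable 1D diffusion solver together with a learned correction network, so that the network interacts with the solver during training. It is a toy stand-in for the paper’s setup, not the released code: the “reference” here is simply the same solver run with smaller sub-steps.

import torch

torch.manual_seed(0)
N, dt, nu = 64, 0.1, 0.05

def diffusion_step(u, dt):
    # Explicit 1D diffusion step with periodic boundaries (fully differentiable).
    lap = torch.roll(u, 1, -1) - 2 * u + torch.roll(u, -1, -1)
    return u + dt * nu * lap

def reference_step(u, dt, substeps=8):
    # Stand-in "high fidelity" reference: the same solver with smaller sub-steps.
    for _ in range(substeps):
        u = diffusion_step(u, dt / substeps)
    return u

correct = torch.nn.Sequential(                # small learned correction network
    torch.nn.Conv1d(1, 32, 5, padding=2, padding_mode="circular"), torch.nn.Tanh(),
    torch.nn.Conv1d(32, 1, 5, padding=2, padding_mode="circular"),
)
opt = torch.optim.Adam(correct.parameters(), lr=1e-3)

for it in range(500):
    u_coarse = torch.randn(16, 1, N) * 0.5    # batch of random initial states
    u_ref = u_coarse.clone()
    loss = 0.0
    for step in range(8):                     # unroll: the network sees its own corrections
        u_coarse = diffusion_step(u_coarse, dt) + correct(u_coarse)
        u_ref = reference_step(u_ref, dt)
        loss = loss + ((u_coarse - u_ref) ** 2).mean()
    opt.zero_grad()
    loss.backward()                           # gradients flow through all solver steps
    opt.step()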

Talk from MIT’s CS Seminar available online now

Nils Thuerey’s talk at MIT’s “Distinguished Seminar Series in Computational Science and Engineering” is online now at:

And here’s the official/original talk info: https://cse.mit.edu/events/thuerey22oct20seminar/

Title: Differentiable Physics Simulations for Deep Learning Algorithms

Abstract: Differentiable physics solvers (from the broader field of differentiable programming) show particular promise for including prior knowledge into machine learning algorithms. Differentiable operators were shown to be powerful tools to guide deep learning processes, and PDEs provide a wide range of components to build such operators. They also represent a natural way for traditional solvers and deep learning methods to coexist: Using PDE solvers as differentiable operators in neural networks allows us to leverage existing numerical methods for efficient solvers, e.g., to provide reliable and flexible gradients to update the weights during a learning run.

Interestingly, it turns out to be beneficial to combine “traditional” supervised and physics-based approaches. The former poses a much more straightforward and more stable learning task by providing explicit reference data, while physics-based learning can provide gradients for a larger space of states that are only encountered at training time. Here, differentiable solvers are particularly powerful, e.g., to provide neural networks with feedback about how inferred solutions influence a physical model’s long-term behavior. I will show and discuss examples with various advection-diffusion type PDEs, among others the Navier-Stokes equations for fluids, for different learning applications. These demonstrations will highlight the properties and capabilities of PDE-powered deep neural networks and serve as a starting point for discussing future developments.

Physics-based Deep Learning Activities at NeurIPS 2020

In addition to our paper at the NeurIPS 2020 main conference (which targets deep learning via differentiable PDE solvers for numerical error reduction), we are excited about contributions to the following four NeurIPS workshops. Details will follow over the course of the next weeks, but these workshops align very nicely with our goal to fuse deep learning, numerical methods and physical simulations as seamlessly as possible. For example, we will present our work on shape optimizations for Navier-Stokes flows as well as our differentiable physics framework PhiFlow.

For now, we can highly recommend checking out the workshops themselves:

  • Differentiable Vision, Graphics, and Physics in Machine Learning
    http://montrealrobotics.ca/diffcvgp/
    Organizers: Krishna Jatavallabhula, Kelsey Allen, Victoria Dean, Johanna Hansen, Shuran Song, Florian Shkurti, Liam Paull, Derek Nowrouzezahrai, Josh Tenenbaum
  • Interpretable Inductive Biases and Physically Structured Learning
    https://inductive-biases.github.io/
    Organizers: Shirley Ho, Michael Lutter, Alexander Terenin, Lei Wang
  • Machine Learning for Engineering Modeling, Simulation, and Design
    https://ml4eng.github.io/
    Organizers: Alex Beatson, Priya L. Donti, Amira Abdel-Rahman, Stephan Hoyer, Rose Yu, J. Zico Kolter, Ryan P. Adam
  • Machine Learning and the Physical Sciences https://ml4physicalsciences.github.io/2020/
    Organizers: Atılım Güneş Baydin, Juan Felipe Carrasquilla, Adji Bousso Dieng, Karthik Kashinath, Gilles Louppe, Brian Nord, Michela Paganini, Savannah Thais

Shape Optimization for Fluids via Learned Surrogate Models

Our paper “Numerical investigation of minimum drag profiles in laminar flow using deep learning surrogates” is online now as a preprint. It targets optimizing shapes by using trained deep neural network models that infer manifolds of Navier-Stokes solutions. We evaluate the accuracy and performance of using pretrained models to minimize the drag of shapes immersed in a moving fluid in low Reynolds number regimes.

Preprint on arXiv
Project page

Full abstract: Efficiently predicting the flowfield and load in aerodynamic shape optimisation remains a highly challenging and relevant task. Deep learning methods have been of particular interest for such problems, due to their success for solving inverse problems in other fields. In the present study, U-net based deep neural network (DNN) models are trained with high-fidelity datasets to infer flow fields, and then employed as surrogate models to carry out the shape optimisation problem, i.e. to find a drag minimal profile with a fixed cross-section area subjected to a two-dimensional steady laminar flow. A level-set method as well as Bézier-curve method are used to parameterise the shape, while trained neural networks in conjunction with automatic differentiation are utilized to calculate the gradient flow in the optimisation framework. The optimised shapes and drag force values calculated from the flowfields predicted by DNN models agree well with reference data obtained via a Navier-Stokes solver and from the literature, which demonstrates that the DNN models are capable of predicting not only flowfield but also yield satisfactory aerodynamic forces. This is particularly promising as the DNNs were not specifically trained to infer aerodynamic forces. In conjunction with the fast runtime, the DNN-based optimisation framework shows promise for general aerodynamic design problems.
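As a small illustration of the Bézier-curve parameterisation mentioned in the abstract, here is a self-contained numpy sketch that samples a symmetric profile from a handful of made-up control points and evaluates the cross-section area constraint via the shoelace formula. It is purely illustrative and not taken from the released code.

import numpy as np
from math import comb

def bezier_curve(control_points, n=200):
    """Sample a Bezier curve from its control points via Bernstein polynomials."""
    P = np.asarray(control_points, dtype=float)        # shape (k+1, 2)
    k = len(P) - 1
    t = np.linspace(0.0, 1.0, n)
    basis = np.stack([comb(k, i) * t**i * (1 - t)**(k - i) for i in range(k + 1)], axis=1)
    return basis @ P                                    # (n, 2) points on the curve

# Illustrative symmetric profile: an upper surface from four control points,
# mirrored to close the shape; the shoelace formula gives the cross-section
# area that the optimisation keeps fixed.
upper = bezier_curve([(0.0, 0.0), (0.25, 0.15), (0.75, 0.15), (1.0, 0.0)])
lower = (upper * np.array([1.0, -1.0]))[::-1]
profile = np.vstack([upper, lower])
x, y = profile[:, 0], profile[:, 1]
area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
print(f"cross-section area: {area:.4f}")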

Improved and Controllable Latent-Space Physics Predictions

Our paper on improved and controllable latent-space physics predictions will be presented at SCA 2020. It builds on our previous work from EG’19 (Latent-space Physics: Towards Learning the Temporal Evolution of Fluid Flow, and Deep Fluids: A Generative Network for Parameterized Fluid Simulations) to improve temporal predictions with an LSTM neural network. In addition, it controls the latent-space content to allow for modifications and improved long-term stability.

The pre-print and repository can be found here.

Full abstract: We propose an end-to-end trained neural network architecture to robustly predict the complex dynamics of fluid flows with high temporal stability. We focus on single-phase smoke simulations in 2D and 3D based on the incompressible Navier-Stokes (NS) equations, which are relevant for a wide range of practical problems. To achieve stable predictions for long-term flow sequences, a convolutional neural network (CNN) is trained for spatial compression in combination with a temporal prediction network that consists of stacked Long Short-Term Memory (LSTM) layers. Our core contribution is a novel latent space subdivision (LSS) to separate the respective input quantities into individual parts of the encoded latent space domain. This allows to distinctively alter the encoded quantities without interfering with the remaining latent space values and hence maximizes external control. By selectively overwriting parts of the predicted latent space points, our proposed method is capable to robustly predict long-term sequences of complex physics problems. In addition, we highlight the benefits of a recurrent training on the latent space creation, which is performed by the spatial compression network.
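To sketch the latent space subdivision idea in code, here is a conceptual PyTorch toy (not the paper’s implementation; the layer sizes and the density/velocity split are made up): a convolutional encoder compresses each frame into a latent vector, an LSTM predicts the next latent point, and one part of the predicted latent code is overwritten for external control before decoding.

import torch

torch.manual_seed(0)
latent, half = 32, 16

encoder = torch.nn.Sequential(
    torch.nn.Conv2d(3, 16, 4, stride=2, padding=1), torch.nn.ReLU(),   # 3 channels: density + 2D velocity
    torch.nn.Conv2d(16, 32, 4, stride=2, padding=1), torch.nn.ReLU(),
    torch.nn.Flatten(), torch.nn.Linear(32 * 16 * 16, latent),
)
predictor = torch.nn.LSTM(input_size=latent, hidden_size=latent, batch_first=True)

frames = torch.randn(1, 6, 3, 64, 64)                                # (batch, time, channels, H, W) toy sequence
z = torch.stack([encoder(frames[:, t]) for t in range(6)], dim=1)    # (1, 6, latent)

z_pred, _ = predictor(z[:, :-1])                                     # predict next latent points from the history
z_next = z_pred[:, -1]                                               # predicted latent code for the next frame

# Latent space subdivision: keep the predicted density part, but overwrite the
# velocity part with an externally prescribed control code before decoding.
control_velocity = torch.zeros(1, half)
z_next = torch.cat([z_next[:, :half], control_velocity], dim=1)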

racecar Training for Improved Generalization

Our paper on improving neural network generalization via a forward-backward pass is finally online, together with a first code example. A common question we get about this project: why “racecar”? This is worth explaining in a bit more detail: it’s not about the speed of the method, but rather that racecar is a nice palindrome, i.e., if you reverse the word, you still get “racecar”. Our training approach likewise makes use of a reversed neural network architecture, re-using all existing building blocks of the network and their weights, somewhat similar to a palindrome. Hence the name. Interestingly, this reverse structure yields an embedding of singular vectors into the weight matrices and improves performance for new tasks, as we show for a variety of classification and generation tasks in our paper.

Paper Abstract:
We propose a novel training approach for improving the generalization in neural networks. We show that in contrast to regular constraints for orthogonality, our approach represents a data-dependent orthogonality constraint, and is closely related to singular value decompositions of the weight matrices. We also show how our formulation is easy to realize in practical network architectures via a reverse pass, which aims for reconstructing the full sequence of internal states of the network. Despite being a surprisingly simple change, we demonstrate that this forward-backward training approach, which we refer to as racecar training, leads to significantly more generic features being extracted from a given data set. Networks trained with our approach show more balanced mutual information between input and output throughout all layers, yield improved explainability, and exhibit improved performance for a variety of tasks and task transfers.
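A minimal PyTorch sketch of the reverse pass for a two-layer fully connected network may help to illustrate the idea. It is a simplified toy, not the released implementation, and the loss weights are arbitrary: the reverse pass re-uses the transposed weights of the forward layers to reconstruct the internal activations, and the reconstruction terms are added to the regular task loss.

import torch
import torch.nn.functional as F

torch.manual_seed(0)
W1 = torch.nn.Linear(64, 32)
W2 = torch.nn.Linear(32, 10)
opt = torch.optim.Adam(list(W1.parameters()) + list(W2.parameters()), lr=1e-3)

x = torch.randn(128, 64)                              # toy batch
labels = torch.randint(0, 10, (128,))

for it in range(200):
    # Forward pass.
    h = torch.relu(W1(x))
    y = W2(h)

    # Reverse pass with the *same* weight matrices, transposed (palindrome-style).
    h_rec = torch.relu(F.linear(y, W2.weight.t()))
    x_rec = F.linear(h_rec, W1.weight.t())

    loss = F.cross_entropy(y, labels)                 # regular task loss
    loss = loss + 0.1 * F.mse_loss(h_rec, h)          # reconstruct the internal state
    loss = loss + 0.1 * F.mse_loss(x_rec, x)          # reconstruct the input
    opt.zero_grad(); loss.backward(); opt.step()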

Medium-range weather forecasting with deep learning

Our new paper “Purely data-driven medium-range weather forecasting achieves comparable skill to physical models at similar resolution” is available now on arXiv: https://arxiv.org/abs/2008.08626

We show that, given enough data, a deep-learning-based model can actually compete with and in some cases outperform established physical models (e.g., IFS forecasts at 210 km resolution). We show how such models can be trained on the WeatherBench data set, that they contain plausible learned structures, and that they also fare well for challenging fields such as precipitation. At the same time, they illustrate that it will be very difficult to increase the performance further with only the data that is currently available.

Full abstract: Numerical weather prediction has traditionally been based on physical models of the atmosphere. Recently, however, the rise of deep learning has created increased interest in purely data-driven medium-range weather forecasting with first studies exploring the feasibility of such an approach. Here, we train a significantly larger model than in previous studies to predict geopotential, temperature and precipitation up to 5 days ahead and achieve comparable skill to a physical model run at similar horizontal resolution. Crucially, we pretrain our models on historical climate model output before fine-tuning them on the reanalysis data. We also analyze how the neural network creates its predictions and find that, with some exceptions, it is compatible with physical reasoning. Our results indicate that, given enough training data, data-driven models can compete with physical models. At the same time, there is likely not enough data to scale this approach to the resolutions of current operational models.

Differentiable Physics Simulations for Deep Learning: Paper & Overview Talk online

We’re happy to report that our paper on using differentiable physics to reduce numerical errors of PDE solvers is online, and a corresponding overview talk is also available:
– Solver-in-the-Loop: Learning from Differentiable Physics to Interact with Iterative PDE-Solvers
– Differentiable Physics Simulations for Deep Learning, Talk by Nils Thuerey

Our results demonstrate that differentiable physics solvers are a powerful tool, and they fit neatly into the current larger deep learning trend of generic “differentiable programming”. Not only do they yield very good minimizers in the form of well-trained neural networks; a nice side effect is that they let us leverage the powerful numerical methods that already exist for physical simulations and employ them to improve the training of deep neural nets.

Solver-in-the-Loop Paper Abstract: Finding accurate solutions to partial differential equations (PDEs) is a crucial task in all scientific and engineering disciplines. It has recently been shown that machine learning methods can improve the solution accuracy by correcting for effects not captured by the discretized PDE. We target the problem of reducing numerical errors of iterative PDE solvers and compare different learning approaches for finding complex correction functions. We find that previously used learning approaches are significantly outperformed by methods that integrate the solver into the training loop and thereby allow the model to interact with the PDE during training. This provides the model with realistic input distributions that take previous corrections into account, yielding improvements in accuracy with stable rollouts of several hundred recurrent evaluation steps and surpassing even tailored supervised variants. We highlight the performance of the differentiable physics networks for a wide variety of PDEs, from non-linear advection-diffusion systems to three-dimensional Navier-Stokes flows.

Final Version of “Learning Similarity Metrics for Numerical Simulations” Online

We’re happy to report that the final version of our paper on “Learning Similarity Metrics for Numerical Simulations” to be presented at the International Conference on Machine Learning (ICML) is online now. We propose learning a metric for data produced by numerical simulations, i.e. PDEs such as Navier-Stokes, and a way to train Siamese networks with a correlation-based loss to improve the inference of similarities. The resulting deep learning based metric outperforms simpler metrics and other learned metrics such as LPIPS.

Assessing similarity for complex data is a fundamental problem in all computational disciplines, ranging from simulations of blood flow to aircraft design. Many practical problems rely on highly complex PDEs, where small perturbations of the input drastically alter the solutions. Regular vector space metrics like the L² distance are unreliable, as they perform an element-wise comparison and thus cannot capture contextual information or structures on different scales. Our approach, dubbed LSiM, employs convolutional neural networks (CNNs) to extract and compare more meaningful features from a pair of simulation frames.
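To give a flavour of the two main ingredients, a shared (Siamese) feature extractor and a correlation-based loss, here is a simplified PyTorch sketch. It is illustrative only; the actual LSiM architecture, feature normalization and aggregation differ.

import torch

features = torch.nn.Sequential(                       # shared feature extractor for both inputs
    torch.nn.Conv2d(1, 16, 3, padding=1), torch.nn.ReLU(),
    torch.nn.Conv2d(16, 32, 3, padding=1), torch.nn.ReLU(),
)

def metric_distance(a, b):
    # Aggregate the feature-space difference into one scalar distance per pair.
    fa, fb = features(a), features(b)
    return ((fa - fb) ** 2).mean(dim=(1, 2, 3))

def correlation_loss(pred, target):
    # Negative Pearson correlation between predicted and reference distances.
    p = pred - pred.mean()
    t = target - target.mean()
    return -(p * t).sum() / (p.norm() * t.norm() + 1e-8)

# Toy usage: a batch of frame pairs with known relative distances.
a, b = torch.randn(8, 1, 64, 64), torch.randn(8, 1, 64, 64)
gt = torch.rand(8)                                    # e.g. magnitude of the data perturbation
loss = correlation_loss(metric_distance(a, b), gt)
loss.backward()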

You can check out:
– the updated pre-print on arXiv 2002.07863,
– our website with further details, code, data etc.,
– or the full list of accepted ICML 2020 papers.

GANs for Temporal Self-Supervision of Videos

We typically focus on deep-learning methods for physical data, with a particular emphasis on Navier-Stokes & fluids. However, beyond latent-space simulation algorithms and learning with differentiable solvers, generative models have also been a central theme of our work.

Motivated by time-dependent problems from the physics area, we especially focus on spatio-temporal data such as videos. Here, self-supervision in space as well as time has shown lots of promise, e.g., in the form of the TecoGAN model, which can handle video super-resolution and unpaired video translation, among other tasks. This is the video of a talk given at the CLIP workshop at CVPR 2020, where we demonstrate generative adversarial networks for video super-resolution as well as unpaired video translation. In addition, we’ve targeted improved evaluation metrics for video content. In particular, Nils highlights our choice of a perceptual metric (such as LPIPS), in addition to a temporal perceptual evaluation (tLP) and a motion estimate (tOF). We’ve tested these across a range of examples and verified their rankings with user studies.
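As an illustration of the temporal perceptual evaluation (tLP), the following sketch compares the LPIPS change between consecutive generated frames with the change between the corresponding ground-truth frames. It is a simplified version of the metric (see the TecoGAN paper for the exact definition) and assumes the lpips Python package.

import torch
import lpips   # pip install lpips

lpips_fn = lpips.LPIPS(net="alex")

def tLP(generated, reference):
    """generated, reference: tensors of shape (time, 3, H, W), scaled to [-1, 1]."""
    scores = []
    for t in range(1, generated.shape[0]):
        lp_gen = lpips_fn(generated[t - 1:t], generated[t:t + 1])   # perceptual change in the output
        lp_ref = lpips_fn(reference[t - 1:t], reference[t:t + 1])   # perceptual change in the reference
        scores.append((lp_gen - lp_ref).abs())
    return torch.stack(scores).mean()

# Toy usage with random frames:
gen = torch.rand(5, 3, 64, 64) * 2 - 1
ref = torch.rand(5, 3, 64, 64) * 2 - 1
print(tLP(gen, ref).item())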

Here you can see a part of the perceptual evaluation from our user studies:

Further details:
Talk on YouTube
Learning Temporal Coherence via Self-Supervision for GAN-based Video Generation (TecoGAN)
TecoGAN source code

Source Code and Video for Learning Convolutions in Point-based Representations online

The video and full implementation for our ICLR 2020 paper “Lagrangian Fluid Simulation with Continuous Convolutions” are online now. The method enables flexible and efficient learning of dynamics, e.g., for Lagrangian Navier-Stokes solves similar to Smoothed Particle Hydrodynamics (SPH). But instead of the analytic kernel formulations used in SPH, our method learns the dynamics from data.

Full Abstract: We present an approach to Lagrangian fluid simulation with a new type of convolutional network. Our networks process sets of moving particles, which describe fluids in space and time. Unlike previous approaches, we do not build an explicit graph structure to connect the particles but use spatial convolutions as the main differentiable operation that relates particles to their neighbors. To this end we present a simple, novel, and effective extension of N-D convolutions to the continuous domain. We show that our network architecture can simulate different materials, generalizes to arbitrary collision geometries, and can be used for inverse problems. In addition, we demonstrate that our continuous convolutions outperform prior formulations in terms of accuracy and speed.
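For intuition, here is a strongly simplified PyTorch sketch of a continuous convolution on particle data. Unlike the paper, which uses an interpolated regular kernel grid with a ball-to-cube mapping and an efficient neighbor search, this toy uses an MLP as the continuous kernel function and brute-force neighborhoods.

import torch

class ContinuousConv(torch.nn.Module):
    def __init__(self, in_ch, out_ch, radius):
        super().__init__()
        self.radius = radius
        self.in_ch, self.out_ch = in_ch, out_ch
        # Kernel function: maps a relative position to an (in_ch x out_ch) weight matrix.
        self.kernel = torch.nn.Sequential(
            torch.nn.Linear(3, 32), torch.nn.ReLU(),
            torch.nn.Linear(32, in_ch * out_ch),
        )

    def forward(self, positions, feats):
        # positions: (N, 3) particle positions, feats: (N, in_ch) particle features.
        rel = positions[None, :, :] - positions[:, None, :]        # (N, N, 3) relative positions
        dist = rel.norm(dim=-1)
        mask = (dist < self.radius).float()                        # neighborhood indicator
        W = self.kernel(rel / self.radius)                         # (N, N, in_ch*out_ch)
        W = W.view(*dist.shape, self.in_ch, self.out_ch)
        contrib = torch.einsum("nm,nmio,mi->no", mask, W, feats)   # sum over neighbors m
        return contrib / (mask.sum(dim=1, keepdim=True) + 1e-8)    # normalize by neighbor count

# Toy usage: 128 particles with 4 feature channels (e.g. velocity + density).
conv = ContinuousConv(in_ch=4, out_ch=8, radius=0.2)
pos, feat = torch.rand(128, 3), torch.randn(128, 4)
out = conv(pos, feat)   # (128, 8) per-particle output features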

WeatherBench: Benchmark dataset for data-driven weather forecasting available online

It is worth highlighting that our benchmark dataset for data-driven weather forecasting, i.e. WeatherBench, is fully available online now. You can find the data, evaluation code and baseline models on the corresponding GitHub page.

Here’s the full abstract: Data-driven approaches, most prominently deep learning, have become powerful tools for prediction in many domains. A natural question to ask is whether data-driven methods could also be used for numerical weather prediction. First studies show promise but the lack of a common dataset and evaluation metrics make inter-comparison between studies difficult. Here we present a benchmark dataset for data-driven medium-range weather forecasting, a topic of high scientific interest for atmospheric and computer scientists alike. We provide data derived from the ERA5 archive that has been processed to facilitate the use in machine learning models. We propose a simple and clear evaluation metric which will enable a direct comparison between different methods. Further, we provide baseline scores from simple linear regression techniques, deep learning models as well as purely physical forecasting models. We hope that this dataset will accelerate research in data-driven weather forecasting.
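As a minimal usage example, the following xarray snippet loads the 500 hPa geopotential and builds a simple persistence baseline (“the weather in 5 days equals today’s weather”). The file paths, the variable name “z” and the hourly time resolution are assumptions; see the WeatherBench repository for the actual download scripts and data layout.

import xarray as xr

# Assumed layout: regridded ERA5 NetCDF files for 500 hPa geopotential in one folder.
z500 = xr.open_mfdataset("geopotential_500/*.nc", combine="by_coords")["z"]

lead_time_h = 5 * 24
persistence = z500.shift(time=lead_time_h)        # assuming hourly data: shift by the lead time
truth = z500.sel(time=slice("2017", "2018"))      # typical WeatherBench test period
forecast = persistence.sel(time=slice("2017", "2018"))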

CVPR Paper on Physics-based Reconstructions of 3D Scans of Deformable Objects online now

Our CVPR paper titled “Correspondence-Free Material Reconstruction using Sparse Surface Constraints” is online now with a pre-print, source code, and the corresponding video! Enjoy. The paper proposes a method to optimize for solutions of a finite-element elastodynamics solver that match a set of given observations (in the form of a depth video). For a change, this method does not employ any neural networks or deep learning, but it is nonetheless closely related due to its gradient-based optimization scheme.

Full abstract: We address the problem to infer physical material parameters and boundary conditions from the observed motion of a homogeneous deformable object via the solution of an inverse problem. Parameters are estimated from potentially unreliable real-world data sources such as sparse observations without correspondences. We introduce a novel Lagrangian-Eulerian optimization formulation, including a cost function that penalizes differences to observations during an optimization run. This formulation matches correspondence-free, sparse observations from a single-view depth sequence with a finite element simulation of deformable bodies. In conjunction with an efficient hexahedral discretization and a stable, implicit formulation of collisions, our method can be used in demanding situations to recover a variety of material parameters, ranging from Young’s modulus and Poisson ratio to gravity and stiffness damping, and even external boundaries. In a number of tests using synthetic datasets and real-world measurements, we analyse the robustness of our approach and the convergence behavior of the numerical optimization scheme.
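To illustrate the flavour of such a gradient-based inverse problem, here is a self-contained PyTorch toy that recovers the stiffness and damping of a mass-spring system from sparse observations by differentiating through the time integration. It is purely conceptual and unrelated to the paper’s FEM pipeline; all parameter values are made up.

import torch

torch.manual_seed(0)
dt, steps = 0.01, 400

def simulate(stiffness, damping):
    x, v = torch.tensor(1.0), torch.tensor(0.0)
    traj = []
    for _ in range(steps):
        a = -stiffness * x - damping * v           # spring + damping forces (unit mass)
        v = v + dt * a                             # semi-implicit Euler step
        x = x + dt * v
        traj.append(x)
    return torch.stack(traj)

# Generate sparse "observations" from hidden ground-truth parameters.
with torch.no_grad():
    obs_idx = torch.arange(0, steps, 40)           # only every 40th frame is observed
    observations = simulate(torch.tensor(4.0), torch.tensor(0.3))[obs_idx]

# Optimize the unknown material parameters to match the observations.
params = torch.tensor([1.0, 0.05], requires_grad=True)   # initial guess: [stiffness, damping]
opt = torch.optim.Adam([params], lr=0.05)
for it in range(300):
    opt.zero_grad()
    traj = simulate(params[0], params[1])
    loss = ((traj[obs_idx] - observations) ** 2).mean()   # penalize differences to observations
    loss.backward()                                       # gradients from the differentiable simulator
    opt.step()
print(params.detach())                                    # should move toward [4.0, 0.3]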