We’re happy to introduce P3D: our PDE-Transformer architecture in 3 dimensions, by Benjamin, Georg & Co., demonstrated at an unprecedented 512^3 resolution! That means the Transformer produces over 400 million degrees of freedom in one go 😀 a regime that was previously out of reach for neural surrogates. You can check out the full paper on arXiv now: http://arxiv.org/abs/2509.10186
Of course, the model is diffusion-ready, as we show with a probabilistic, turbulent channel flow scenario. We also showcase a multi-PDE scenario with wide-ranging dynamics.
Key to the approach is a modular, pre-trained PDE-Transformer: multiple instances of it are applied to patches of the domain and fused, optionally guided by a “context model” that captures long-range dependencies (a rough sketch of this patch-and-fuse idea is below). Paving the way for large-scale neural simulations in the future!
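For intuition, here is a minimal PyTorch-style sketch of that patch-and-fuse setup. This is not the authors’ code: `PDETransformerBlock`, `simulate_step`, the context-guidance-by-addition, and the patch size are all hypothetical stand-ins for the real architecture.

```python
# Minimal sketch (not the authors' code): tile a 3D field into patches,
# run a shared pre-trained block on each, and fuse the patch outputs,
# optionally guided by a global "context" tensor. All names are hypothetical.
import torch
import torch.nn as nn

class PDETransformerBlock(nn.Module):
    """Toy stand-in for one pre-trained PDE-Transformer instance."""
    def __init__(self, channels: int):
        super().__init__()
        self.net = nn.Conv3d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x, context=None):
        out = self.net(x)
        if context is not None:  # optional long-range guidance
            out = out + context
        return out

def simulate_step(field, model, context_model=None, patch=64):
    """Advance a full 3D field one step by fusing patch-wise predictions."""
    B, C, D, H, W = field.shape
    # Hypothetical context model producing a same-shaped guidance field.
    ctx = context_model(field) if context_model is not None else None
    out = torch.empty_like(field)
    for z in range(0, D, patch):
        for y in range(0, H, patch):
            for x in range(0, W, patch):
                sl = (slice(None), slice(None),
                      slice(z, z + patch), slice(y, y + patch),
                      slice(x, x + patch))
                c = ctx[sl] if ctx is not None else None
                out[sl] = model(field[sl], context=c)
    return out

# Usage: 3 velocity channels at 128^3 (a 512^3 grid tiles the same way).
model = PDETransformerBlock(channels=3)
u = torch.randn(1, 3, 128, 128, 128)
u_next = simulate_step(u, model, patch=64)
print(u_next.shape)  # torch.Size([1, 3, 128, 128, 128])
```

The point of the decomposition is that only patch-sized activations ever live in memory, which is what makes training at 512^3 tractable.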
Full abstract: We present a scalable framework for learning deterministic and probabilistic neural surrogates for high-resolution 3D physics simulations. We introduce a hybrid CNN-Transformer backbone architecture targeted at 3D physics simulations, which significantly outperforms existing architectures in terms of speed and accuracy. Our proposed network can be pretrained on small patches of the simulation domain, which can be fused to obtain a global solution, optionally guided via a fast and scalable sequence-to-sequence model to include long-range dependencies. This setup allows for training large-scale models with reduced memory and compute requirements for high-resolution datasets. We evaluate our backbone architecture against a large set of baseline methods with the objective of simultaneously learning the dynamics of 14 different types of PDEs in 3D. We demonstrate how to scale our model to high-resolution isotropic turbulence with spatial resolutions of up to 512^3. Finally, we demonstrate the versatility of our network by training it as a diffusion model to produce probabilistic samples of highly turbulent 3D channel flows across varying Reynolds numbers, accurately capturing the underlying flow statistics.
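To illustrate the diffusion-model training mentioned at the end of the abstract, here is a minimal DDPM-style sketch of learning to denoise the next flow state conditioned on the previous one. The noise schedule, the `Denoiser` network, and the conditioning-by-concatenation are illustrative assumptions, not the paper’s exact formulation:

```python
# Minimal DDPM-style sketch: train a surrogate as a diffusion model that
# denoises the next flow state given the previous one. Schedule, network,
# and conditioning are illustrative assumptions, not the paper's setup.
import torch
import torch.nn as nn
import torch.nn.functional as F

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_cum = torch.cumprod(1.0 - betas, dim=0)

class Denoiser(nn.Module):
    """Toy stand-in for the PDE-Transformer used as a diffusion backbone."""
    def __init__(self, channels=3):
        super().__init__()
        # Input: noisy next state concatenated with the conditioning state.
        self.net = nn.Conv3d(2 * channels, channels, kernel_size=3, padding=1)

    def forward(self, x_noisy, cond, t):
        # The timestep t is ignored in this toy stand-in; a real backbone
        # would embed it (e.g., via sinusoidal features).
        return self.net(torch.cat([x_noisy, cond], dim=1))

def diffusion_loss(model, x_next, x_prev):
    """One training step: predict the noise added to the next flow state."""
    b = x_next.shape[0]
    t = torch.randint(0, T, (b,))
    a = alphas_cum[t].view(b, 1, 1, 1, 1)
    noise = torch.randn_like(x_next)
    x_noisy = a.sqrt() * x_next + (1 - a).sqrt() * noise
    return F.mse_loss(model(x_noisy, x_prev, t), noise)

model = Denoiser()
x_prev = torch.randn(2, 3, 32, 32, 32)  # conditioning state (toy size)
x_next = torch.randn(2, 3, 32, 32, 32)  # target next state
loss = diffusion_loss(model, x_next, x_prev)
loss.backward()
```

Sampling then runs the usual reverse-diffusion loop per step, which is what yields probabilistic ensembles of channel-flow states rather than a single deterministic prediction.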