We’re very excited to report that our P3D Transformer was accepted at ICLR: https://openreview.net/forum?id=8UdCE5nhFl

  • In the paper we introduce a scalable hybrid CNN–Transformer architecture that pushes neural surrogate modeling into the regime of high-resolution 3D simulations. By combining fast local convolutions with carefully designed attention and an optional global context model, the P3D architecture overcomes the memory and compute barriers that have previously limited Transformers for PDE problems.
  • P3D is trained on small 3D “crops” but seamlessly assembles them into coherent, large-scale solutions. We show that P3D can learn multiple types of PDE dynamics in 3D simultaneously, and that it outperforms state-of-the-art neural operators and transformers in both accuracy and efficiency.
  • Beyond deterministic prediction, P3D also works as a probabilistic generative model: trained via flow-matching diffusion, it produces high-fidelity samples of turbulent channel flows across varying Reynolds numbers, capturing correct velocity profiles and higher-order statistics (while being orders of magnitude faster than DNS).
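
The crop-and-assemble idea above can be illustrated with a minimal sketch: partition a 3D field into overlapping crops, run a model on each crop independently, and blend the outputs by averaging in the overlap regions. This is a hypothetical illustration only; the function name `predict_in_crops`, the crop and overlap sizes, and the simple averaging blend are assumptions for clarity, not the blending scheme used in the paper.

```python
import numpy as np

def predict_in_crops(field, model, crop=32, overlap=8):
    """Apply `model` to overlapping 3D crops of `field` and blend the
    per-crop outputs by averaging where crops overlap.
    All parameter values here are illustrative, not the paper's settings."""
    D, H, W = field.shape
    out = np.zeros(field.shape, dtype=float)
    weight = np.zeros(field.shape, dtype=float)
    step = crop - overlap

    def axis_starts(n):
        # Crop start positions along one axis; the last crop is shifted
        # so that it always touches the domain boundary.
        s = list(range(0, max(n - crop, 0) + 1, step))
        if not s or s[-1] + crop < n:
            s.append(max(n - crop, 0))
        return s

    for z in axis_starts(D):
        for y in axis_starts(H):
            for x in axis_starts(W):
                patch = field[z:z+crop, y:y+crop, x:x+crop]
                out[z:z+crop, y:y+crop, x:x+crop] += model(patch)
                weight[z:z+crop, y:y+crop, x:x+crop] += 1.0
    return out / weight  # average contributions in overlap regions
```

With an identity "model", the assembled output reproduces the input field exactly, which is a quick sanity check that the tiling covers the domain and the overlap blending is consistent.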

Code and pre-trained models will be up soon!