You Xie and Rachel Chu just successfully presented their paper “tempoGAN: A Temporally Coherent, Volumetric GAN for Super-resolution Fluid Flow” at SIGGRAPH 2018. You can watch their full presentation in the YouTube video below.
Presentation:
More info regarding the SIGGRAPH 2018 technical papers:
tempoGAN paper abstract:
We propose a temporally coherent generative model addressing the super-resolution problem for fluid flows. Our work represents the first approach to synthesize four-dimensional physics fields with neural networks. Based on a conditional generative adversarial network that is designed for the inference of three-dimensional volumetric data, our model generates consistent and detailed results by using a novel temporal discriminator, in addition to the commonly used spatial one. Our experiments show that the generator is able to infer more realistic high-resolution details by using additional physical quantities, such as low-resolution velocities or vorticities. Besides improvements in the training process and in the generated outputs, these inputs offer means for artistic control as well. We additionally employ a physics-aware data augmentation step, which is crucial to avoid overfitting and to reduce memory requirements. In this way, our network learns to generate advected quantities with highly detailed, realistic, and temporally coherent features. Our method works instantaneously, using only a single time-step of low-resolution fluid data. We demonstrate the abilities of our method using a variety of complex inputs and applications in two and three dimensions.
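To make the two-discriminator idea from the abstract more concrete, here is a minimal, illustrative sketch in PyTorch (not the authors' implementation): a generator upsamples a low-resolution density-plus-velocity volume, a spatial discriminator judges individual high-resolution volumes, and a temporal discriminator judges a stack of consecutive generated frames. All network sizes, channel counts, and the loss weighting are placeholder assumptions chosen for brevity.

```python
# Hedged sketch of a conditional GAN with spatial + temporal discriminators
# for volumetric super-resolution. Architecture details are assumptions,
# not the tempoGAN reference implementation.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a low-res density + velocity volume to a high-res density volume."""
    def __init__(self, in_channels=4, upscale=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=upscale, mode="trilinear", align_corners=False),
            nn.Conv3d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv3d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """3D conv classifier; in_channels=1 for spatial use, =3 for a 3-frame temporal stack."""
    def __init__(self, in_channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_channels, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv3d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(64, 1),
        )

    def forward(self, x):
        return self.net(x)

def generator_loss(d_spatial, d_temporal, fake_frames, hi_frames):
    """Adversarial terms from both discriminators plus a simple L1 data term."""
    bce = nn.BCEWithLogitsLoss()
    l1 = nn.L1Loss()
    # Spatial term: the generated middle frame should look real on its own.
    s_logits = d_spatial(fake_frames[1])
    # Temporal term: the stack of consecutive generated frames should look real.
    t_logits = d_temporal(torch.cat(fake_frames, dim=1))
    return (bce(s_logits, torch.ones_like(s_logits))
            + bce(t_logits, torch.ones_like(t_logits))
            + 5.0 * l1(fake_frames[1], hi_frames[1]))  # weight is an assumption

if __name__ == "__main__":
    G = Generator()
    D_spatial, D_temporal = Discriminator(1), Discriminator(3)
    # Three consecutive low-res frames: density + 3 velocity components, 16^3 voxels.
    lo = [torch.randn(1, 4, 16, 16, 16) for _ in range(3)]
    hi = [torch.randn(1, 1, 64, 64, 64) for _ in range(3)]
    fake = [G(frame) for frame in lo]
    loss_g = generator_loss(D_spatial, D_temporal, fake, hi)
    loss_g.backward()
    print("generator loss:", loss_g.item())
```

In a full training loop, the two discriminators would of course also be trained on real versus generated volumes; the sketch only shows how the temporal discriminator sees sequences while the spatial one sees single frames, which is the distinction the abstract highlights.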