We’ve been working on new examples with our deep-learning-based video super-resolution method (TecoGAN), which employs a novel spatio-temporal discriminator. Enjoy! These examples nicely highlight the large amount of coherent detail that our method generates via GAN-based training of the generator. And we’re of course still working on publishing the source code and trained models, coming up soon…
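As a rough illustration of the idea behind a spatio-temporal discriminator (this is not the paper's implementation, just a minimal NumPy sketch with a hypothetical helper name): instead of judging single frames, the discriminator sees short frame sequences, e.g. by stacking consecutive triplets along the channel axis so a convolutional network receives temporal context.

```python
import numpy as np

def stack_frame_triplets(frames):
    """Hypothetical helper: turn a video of shape (T, H, W, C) into
    (T-2, H, W, 3*C) by stacking each run of three consecutive frames
    along the channel axis -- the kind of input a spatio-temporal
    discriminator could evaluate jointly."""
    t = frames.shape[0]
    return np.stack(
        [np.concatenate(frames[i:i + 3], axis=-1) for i in range(t - 2)]
    )

# 5 RGB frames of size 32x32 -> 3 overlapping triplet stacks
video = np.random.rand(5, 32, 32, 3)
triplets = stack_frame_triplets(video)
print(triplets.shape)  # (3, 32, 32, 9)
```

Because the discriminator scores whole triplets, the generator is pushed to produce detail that stays coherent across frames, not just plausible in each frame on its own.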
If you’re interested in the details, you can read the full pre-print here: https://ge.in.tum.de/publications/2019-tecogan-chu/
Or you can check out the accompanying paper video here: