https://film-net.github.io/

FILM: Frame Interpolation for Large Motion

Fitsum Reda^1, Janne Kontkanen^1, Eric Tabellion^1, Deqing Sun^1, Caroline Pantofaru^1, Brian Curless^1,2
^1Google Research  ^2University of Washington

ECCV 2022

Paper · arXiv · Video · Code

FILM turns near-duplicate photos into slow-motion footage that looks as if it were shot with a video camera.

Abstract

We present a frame interpolation algorithm that synthesizes multiple intermediate frames from two input images with large in-between motion. Recent methods use multiple networks to estimate optical flow or depth, plus a separate network dedicated to frame synthesis. This is often complex and requires scarce optical flow or depth ground truth. In this work, we present a single unified network, distinguished by a multi-scale feature extractor that shares weights across all scales, and that is trainable from frames alone. To synthesize crisp and pleasing frames, we propose to optimize our network with the Gram matrix loss, which measures the correlation difference between feature maps. Our approach outperforms state-of-the-art methods on the Xiph large motion benchmark. We also achieve higher scores on Vimeo-90K, Middlebury, and UCF101 when comparing to methods that use perceptual losses. We study the effect of weight sharing and of training with datasets of increasing motion range. Finally, we demonstrate our model's effectiveness in synthesizing high-quality and temporally coherent videos on a challenging near-duplicate photos dataset.

Loss Functions Ablation
[Figure: loss functions ablation]

FILM Architecture Overview
[Figure: architecture overview]

Video

BibTeX

@inproceedings{reda2022film,
  title     = {FILM: Frame Interpolation for Large Motion},
  author    = {Fitsum Reda and Janne Kontkanen and Eric Tabellion and Deqing Sun and Caroline Pantofaru and Brian Curless},
  booktitle = {The European Conference on Computer Vision (ECCV)},
  year      = {2022}
}

This website template is borrowed from Nerfies.
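To illustrate the weight-sharing idea from the abstract, the sketch below applies one set of convolution weights at every level of an image pyramid. This is a minimal PyTorch reading of "a multi-scale feature extractor that shares weights across all scales"; the layer sizes, pyramid depth, and pooling choice are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedPyramidFeatures(nn.Module):
    """Toy scale-shared feature extractor: the SAME conv weights are
    applied at every level of an image pyramid (a hedged reading of
    FILM's weight sharing, not the paper's exact network)."""

    def __init__(self, channels: int = 32, levels: int = 4):
        super().__init__()
        self.levels = levels
        self.conv = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        )

    def forward(self, image: torch.Tensor) -> list[torch.Tensor]:
        feats, x = [], image
        for _ in range(self.levels):
            feats.append(self.conv(x))  # identical weights at every scale
            x = F.avg_pool2d(x, 2)      # next, coarser pyramid level
        return feats
```

Because the weights are reused across scales, the extractor can respond to large motion at coarse levels with the same features it learned at fine levels.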
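Similarly, the Gram matrix loss mentioned in the abstract compares second-order statistics (channel correlations) of feature maps rather than the feature values themselves. A minimal single-level sketch in PyTorch might look like the following; the feature source (e.g. a pretrained VGG) and the normalization constant are assumptions:

```python
import torch

def gram_matrix(feats: torch.Tensor) -> torch.Tensor:
    """Channel-correlation (Gram) matrix of feature maps.

    feats: (B, C, H, W) features from some backbone; returns (B, C, C).
    """
    b, c, h, w = feats.shape
    f = feats.reshape(b, c, h * w)
    # Correlate every channel with every other, normalized by size.
    return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)

def gram_loss(pred_feats: torch.Tensor, gt_feats: torch.Tensor) -> torch.Tensor:
    """Squared Frobenius distance between prediction and ground-truth Grams."""
    return ((gram_matrix(pred_feats) - gram_matrix(gt_feats)) ** 2).mean()
```

In practice, a loss of this kind is typically evaluated at several feature levels and combined with pixel-wise and perceptual terms, with the weighting tuned per dataset.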