https://generative-dynamics.github.io/

Generative Image Dynamics
Zhengqi Li, Richard Tucker, Noah Snavely, Aleksander Holynski
Google Research

Paper | arXiv | Demo

Our approach models an image-space prior on scene dynamics that can be used to turn a single image into a seamless looping video or an interactive dynamic scene.

Our method automatically turns single still images into seamless looping videos.

Abstract

We present an approach to modeling an image-space prior on scene dynamics. Our prior is learned from a collection of motion trajectories extracted from real video sequences containing natural, oscillating motion such as trees, flowers, candles, and clothes blowing in the wind. Given a single image, our trained model uses a frequency-coordinated diffusion sampling process to predict a per-pixel long-term motion representation in the Fourier domain, which we call a neural stochastic motion texture. This representation can be converted into dense motion trajectories that span an entire video. Along with an image-based rendering module, these trajectories can be used for a number of downstream applications, such as turning still images into seamlessly looping dynamic videos, or allowing users to realistically interact with objects in real pictures.

With stochastic motion textures, we can simulate the response of object dynamics to an interactive user excitation.

Try it yourself! Click and drag a point on the image below, then release to see how the scene moves. (The demo requires a browser with WebGL2 support.) Try a different image by clicking on the icons below.

We can minify (top) or magnify (bottom) animated motions by adjusting the amplitude of motion textures.

Slow-motion videos can be generated by interpolating predicted motion textures.

Acknowledgements

Thanks to Rick Szeliski, Andrew Liu, Qianqian Wang, Boyang Deng, Xuan Luo, and Lucy Chai for helpful proofreading, comments, and discussions. This website template is borrowed from nerfies. Thanks, Keunhong!
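To make the idea of a neural stochastic motion texture concrete, below is a minimal illustrative sketch, not the authors' released code: it assumes the texture is stored as per-pixel complex Fourier coefficients over a small number of temporal frequency bands, and converts them into per-frame displacement fields by summing the corresponding sinusoids. Scaling the coefficient amplitudes gives the minify/magnify effect described above. The array shapes, function names, and the 16-band frequency count are assumptions for illustration only.

```python
import numpy as np

def motion_texture_to_trajectories(texture, num_frames):
    """Convert a stochastic motion texture into dense per-frame displacements.

    texture: complex array of shape (H, W, K, 2) holding, for each pixel,
             K Fourier coefficients of its (x, y) motion trajectory.
    num_frames: number of video frames T to synthesize.

    Returns a real array of shape (T, H, W, 2) of per-pixel displacements.
    """
    H, W, K, _ = texture.shape
    t = np.arange(num_frames)
    # Complex exponential basis e^{i 2*pi*f*t / T} for each frequency band f.
    basis = np.exp(2j * np.pi * np.outer(np.arange(K), t) / num_frames)  # (K, T)
    # Sum coefficients over frequency bands; the real part is the displacement.
    return np.einsum('hwkc,kt->thwc', texture, basis).real

def scale_motion(texture, factor):
    """Minify (factor < 1) or magnify (factor > 1) the animated motion by
    scaling the amplitude of the motion texture coefficients."""
    return texture * factor

# Hypothetical usage: a texture predicted by the diffusion model for one image.
texture = np.zeros((256, 256, 16, 2), dtype=np.complex64)
frames = motion_texture_to_trajectories(scale_motion(texture, 2.0), num_frames=150)
```

In this sketch the resulting displacement fields would then be passed to an image-based rendering module to warp the input image into video frames; that rendering step is not shown here.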