
Generative Inbetweening: Adapting Image-to-Video Models for Keyframe Interpolation

Abstract

We present a method for generating video sequences with coherent motion between a pair of input keyframes. We adapt a pretrained large-scale image-to-video diffusion model (originally trained to generate videos moving forward in time from a single input image) for keyframe interpolation, i.e., to produce a video in between two input frames. We accomplish this adaptation through a lightweight fine-tuning technique that produces a version of the model that instead predicts videos moving backward in time from a single input image. This model (along with the original forward-moving model) is subsequently used in a dual-directional diffusion sampling process that combines the overlapping model estimates starting from each of the two keyframes. Our experiments show that our method outperforms both existing diffusion-based methods and traditional frame interpolation techniques.
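As a rough illustration of the dual-directional sampling idea, the sketch below shows one way the forward-in-time and time-reversed denoiser estimates could be fused at each sampling step. It is PyTorch-style pseudocode under stated assumptions, not the authors' released implementation: `forward_model` and `backward_model` are hypothetical callables standing in for the original and fine-tuned denoisers, and simple averaging is assumed for combining the overlapping estimates.

```python
import torch

def dual_directional_step(x_t, t, forward_model, backward_model, frame_0, frame_1):
    """Fuse forward- and backward-in-time denoiser estimates for one step.

    Assumptions (hypothetical, for illustration only):
      - x_t has shape (batch, frames, channels, height, width)
      - forward_model / backward_model map (noisy_video, timestep, cond=keyframe)
        to a noise estimate for the whole clip
    """
    # Forward branch: condition on the first keyframe; frames run 0 -> 1 in time.
    eps_fwd = forward_model(x_t, t, cond=frame_0)

    # Backward branch: run the time-reversed clip through the fine-tuned model,
    # conditioned on the second keyframe, then flip the estimate back so both
    # branches share the same frame ordering.
    eps_bwd = backward_model(torch.flip(x_t, dims=[1]), t, cond=frame_1)
    eps_bwd = torch.flip(eps_bwd, dims=[1])

    # Combine the overlapping estimates; a plain average is assumed here.
    return 0.5 * (eps_fwd + eps_bwd)
```

In a full sampler, this fused estimate would take the place of the single-model noise prediction inside each denoising update, so that the generated frames stay consistent with both input keyframes.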

@article{wang2025_2408.15239,
  title={Generative Inbetweening: Adapting Image-to-Video Models for Keyframe Interpolation},
  author={Xiaojuan Wang and Boyang Zhou and Brian Curless and Ira Kemelmacher-Shlizerman and Aleksander Holynski and Steven M. Seitz},
  journal={arXiv preprint arXiv:2408.15239},
  year={2025}
}