As video generation models have advanced significantly in recent years, we adopt large-scale image-to-video diffusion models for video frame interpolation. We present a conditional encoder designed to adapt an image-to-video model for large-motion frame interpolation. To enhance performance, we integrate a dual-branch feature extractor and propose a cross-frame attention mechanism that effectively captures both spatial and temporal information, enabling accurate interpolation of intermediate frames. Our approach achieves superior performance on the Fréchet Video Distance (FVD) metric compared with other state-of-the-art methods, particularly in large-motion scenarios, highlighting the progress of generative-based methodologies.
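The abstract's cross-frame attention can be illustrated with a minimal sketch: tokens of the frame being interpolated attend over the concatenated tokens of the two boundary frames, so each intermediate-frame feature aggregates context from both endpoints. This is an illustrative single-head version with learned projections omitted; the function name, shapes, and details are assumptions, not the paper's actual architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_frame_attention(target, frame0, frame1):
    """Hypothetical sketch of cross-frame attention: scaled dot-product
    attention where target-frame tokens (n, d) attend to the tokens of
    both boundary frames jointly. Projection matrices are omitted."""
    context = np.concatenate([frame0, frame1], axis=0)   # (2n, d)
    d = target.shape[-1]
    scores = target @ context.T / np.sqrt(d)             # (n, 2n)
    weights = softmax(scores, axis=-1)                   # rows sum to 1
    return weights @ context                             # (n, d)

# Toy usage: 16 tokens per frame, 64-dim features.
n, d = 16, 64
rng = np.random.default_rng(0)
out = cross_frame_attention(rng.normal(size=(n, d)),
                            rng.normal(size=(n, d)),
                            rng.normal(size=(n, d)))
print(out.shape)
```

In the full model, separate key/value projections per boundary frame (the dual-branch extractor) would feed this attention; here both frames share the raw feature space for brevity.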
@article{jin2025_2412.17042,
  title   = {Adapting Image-to-Video Diffusion Models for Large-Motion Frame Interpolation},
  author  = {Luoxu Jin and Hiroshi Watanabe},
  journal = {arXiv preprint arXiv:2412.17042},
  year    = {2025}
}