Adapting Image-to-Video Diffusion Models for Large-Motion Frame Interpolation

22 December 2024
Luoxu Jin
Hiroshi Watanabe
Abstract

As video generation models have advanced significantly in recent years, we adapt large-scale image-to-video diffusion models for video frame interpolation. We present a conditional encoder designed to adapt an image-to-video model for large-motion frame interpolation. To enhance performance, we integrate a dual-branch feature extractor and propose a cross-frame attention mechanism that effectively captures both spatial and temporal information, enabling accurate interpolation of intermediate frames. Our approach demonstrates superior performance on the Fréchet Video Distance (FVD) metric when evaluated against other state-of-the-art approaches, particularly in handling large-motion scenarios, highlighting advancements in generative-based methodologies.
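The abstract does not specify the architecture of the cross-frame attention mechanism, so the following is only a minimal sketch of how such a block is commonly built: queries come from the intermediate frame being synthesized, while keys and values come from the two input key frames. All module names, tensor shapes, and dimensions below are illustrative assumptions, not the authors' implementation.

# Minimal cross-frame attention sketch (assumed design, not the paper's code).
import torch
import torch.nn as nn

class CrossFrameAttention(nn.Module):
    """Attend from intermediate-frame tokens to start/end key-frame tokens."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_q = nn.LayerNorm(dim)
        self.norm_kv = nn.LayerNorm(dim)

    def forward(self, mid_tokens: torch.Tensor, key_tokens: torch.Tensor) -> torch.Tensor:
        # mid_tokens: (B, N_mid, dim) tokens of the frame being interpolated
        # key_tokens: (B, N_key, dim) tokens of the start and end frames
        q = self.norm_q(mid_tokens)
        kv = self.norm_kv(key_tokens)
        out, _ = self.attn(q, kv, kv)
        # Residual connection preserves the original spatial features.
        return mid_tokens + out

if __name__ == "__main__":
    block = CrossFrameAttention(dim=320)
    mid = torch.randn(2, 256, 320)    # intermediate-frame tokens (assumed shape)
    keys = torch.randn(2, 512, 320)   # concatenated start + end frame tokens
    print(block(mid, keys).shape)     # torch.Size([2, 256, 320])

In a dual-branch setup as described, one branch would typically supply the key-frame features feeding the keys/values above, while the other carries the diffusion model's own latent features; how the two are fused in the paper is not stated here.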

@article{jin2025_2412.17042,
  title={Adapting Image-to-Video Diffusion Models for Large-Motion Frame Interpolation},
  author={Luoxu Jin and Hiroshi Watanabe},
  journal={arXiv preprint arXiv:2412.17042},
  year={2025}
}