EGVD: Event-Guided Video Diffusion Model for Physically Realistic Large-Motion Frame Interpolation

26 March 2025
Ziran Zhang
Xiaohui Li
Yihao Liu
Yujin Wang
Yueting Chen
Tianfan Xue
Shi Guo
Abstract

Video frame interpolation (VFI) in scenarios with large motion remains challenging due to motion ambiguity between frames. While event cameras can capture high-temporal-resolution motion information, existing event-based VFI methods struggle with limited training data and complex motion patterns. In this paper, we introduce the Event-Guided Video Diffusion Model (EGVD), a novel framework that leverages the powerful priors of pre-trained stable video diffusion models alongside the precise temporal information from event cameras. Our approach features a Multi-modal Motion Condition Generator (MMCG) that effectively integrates RGB frames and event signals to guide the diffusion process, producing physically realistic intermediate frames. We employ a selective fine-tuning strategy that preserves spatial modeling capabilities while efficiently incorporating event-guided temporal information. We incorporate input-output normalization techniques inspired by recent advances in diffusion modeling to enhance training stability across varying noise levels. To improve generalization, we construct a comprehensive dataset combining both real and simulated event data across diverse scenarios. Extensive experiments on both real and simulated datasets demonstrate that EGVD significantly outperforms existing methods in handling large motion and challenging lighting conditions, achieving substantial improvements in perceptual quality metrics (27.4% better LPIPS on Prophesee and 24.1% on BSRGB) while maintaining competitive fidelity measures. Code and datasets are available at: this https URL.
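The abstract mentions input-output normalization to stabilize diffusion training across varying noise levels. The paper's exact scheme is not given here; as an illustration only, the sketch below shows a common EDM-style preconditioning, where scale factors are chosen so the network's effective input and training target stay at unit variance for any noise level `sigma` (the function name and `sigma_data` default are hypothetical, not from the paper):

```python
import numpy as np

def precondition(sigma, sigma_data=0.5):
    """EDM-style input/output scalings (illustrative assumption, not
    the paper's exact normalization). For a clean signal x with
    std sigma_data and additive noise of std sigma, these factors keep
    the scaled network input and the regression target at unit variance."""
    denom = sigma**2 + sigma_data**2
    c_in = 1.0 / np.sqrt(denom)                  # scales the noisy input
    c_skip = sigma_data**2 / denom               # skip-connection weight
    c_out = sigma * sigma_data / np.sqrt(denom)  # scales the network output
    return c_in, c_skip, c_out

# The scaled input c_in * (x + sigma * n) has unit variance when
# x ~ N(0, sigma_data^2) and n ~ N(0, 1), regardless of sigma.
for sigma in (0.02, 1.0, 80.0):
    c_in, _, _ = precondition(sigma)
    var_in = c_in**2 * (sigma**2 + 0.5**2)  # analytic variance of scaled input
    assert abs(var_in - 1.0) < 1e-9
```

Normalizing in this way means a single network can be trained over a wide range of noise levels without its input or target statistics drifting, which matches the stability motivation stated in the abstract.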

@article{zhang2025_2503.20268,
  title={EGVD: Event-Guided Video Diffusion Model for Physically Realistic Large-Motion Frame Interpolation},
  author={Ziran Zhang and Xiaohui Li and Yihao Liu and Yujin Wang and Yueting Chen and Tianfan Xue and Shi Guo},
  journal={arXiv preprint arXiv:2503.20268},
  year={2025}
}