Event-Based Video Frame Interpolation With Cross-Modal Asymmetric Bidirectional Motion Fields

20 February 2025
Taewoo Kim
Yujeong Chae
Hyun-Kurl Jang
Kuk-Jin Yoon
Abstract

Video Frame Interpolation (VFI) aims to generate intermediate video frames between consecutive input frames. Since event cameras are bio-inspired sensors that encode only brightness changes with microsecond temporal resolution, several works have utilized event cameras to enhance the performance of VFI. However, existing methods estimate bidirectional inter-frame motion fields from events alone or from approximations, which cannot account for the complex motions of real-world scenarios. In this paper, we propose a novel event-based VFI framework with cross-modal asymmetric bidirectional motion field estimation. In detail, our EIF-BiOFNet exploits the complementary characteristics of events and images to directly estimate inter-frame motion fields without any approximation methods. Moreover, we develop an interactive attention-based frame synthesis network to efficiently leverage the complementary warping-based and synthesis-based features. Finally, we build a large-scale event-based VFI dataset, ERF-X170FPS, with a high frame rate, extreme motion, and dynamic textures to overcome the limitations of previous event-based VFI datasets. Extensive experimental results validate that our method shows significant performance improvement over state-of-the-art VFI methods on various datasets. Our project page is available at: this https URL
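For illustration only, below is a minimal PyTorch sketch of the general pipeline the abstract describes: bidirectional flow estimation from two frames plus event voxel grids, backward warping of both boundary frames to the intermediate time, and attention-based blending of the warped candidates. The module names (ToyEventVFI, backward_warp), layer sizes, and event representation are assumptions made for the sketch; they do not reproduce the actual EIF-BiOFNet or the paper's frame-synthesis network.

# Minimal conceptual sketch of an event-based VFI pipeline (not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

def backward_warp(img, flow):
    """Warp an image with a dense flow field via bilinear grid sampling."""
    b, _, h, w = img.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=img.dtype, device=img.device),
        torch.arange(w, dtype=img.dtype, device=img.device),
        indexing="ij",
    )
    grid = torch.stack((xs, ys), dim=0).unsqueeze(0) + flow   # (b, 2, h, w) sampling coords
    # Normalize pixel coordinates to [-1, 1] as required by grid_sample.
    grid_x = 2.0 * grid[:, 0] / (w - 1) - 1.0
    grid_y = 2.0 * grid[:, 1] / (h - 1) - 1.0
    grid = torch.stack((grid_x, grid_y), dim=-1)               # (b, h, w, 2)
    return F.grid_sample(img, grid, align_corners=True)

class ToyEventVFI(nn.Module):
    """Toy stand-in: estimate asymmetric bidirectional flows from frames and event
    voxel grids, warp both boundary frames to time t, and blend with an attention map."""
    def __init__(self, event_bins=5):
        super().__init__()
        in_ch = 3 + 3 + 2 * event_bins              # two RGB frames + two event voxel grids
        self.flow_net = nn.Sequential(               # predicts flows t->0 and t->1 (4 channels)
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 4, 3, padding=1),
        )
        self.fusion_net = nn.Sequential(             # predicts a per-pixel blending weight
            nn.Conv2d(6, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, frame0, frame1, events_0t, events_t1):
        flows = self.flow_net(torch.cat([frame0, frame1, events_0t, events_t1], dim=1))
        flow_t0, flow_t1 = flows[:, :2], flows[:, 2:]          # asymmetric: estimated separately
        warped0 = backward_warp(frame0, flow_t0)               # candidate from frame 0
        warped1 = backward_warp(frame1, flow_t1)               # candidate from frame 1
        attn = self.fusion_net(torch.cat([warped0, warped1], dim=1))
        return attn * warped0 + (1.0 - attn) * warped1          # blended intermediate frame

if __name__ == "__main__":
    net = ToyEventVFI(event_bins=5)
    f0 = torch.rand(1, 3, 64, 64)
    f1 = torch.rand(1, 3, 64, 64)
    ev_0t = torch.rand(1, 5, 64, 64)
    ev_t1 = torch.rand(1, 5, 64, 64)
    print(net(f0, f1, ev_0t, ev_t1).shape)  # torch.Size([1, 3, 64, 64])

In the actual method, the warping-based and synthesis-based branches are fused with an interactive attention mechanism rather than the single sigmoid blend used here; the sketch only conveys the overall data flow.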

@article{kim2025_2502.13716,
  title={Event-Based Video Frame Interpolation With Cross-Modal Asymmetric Bidirectional Motion Fields},
  author={Taewoo Kim and Yujeong Chae and Hyun-Kurl Jang and Kuk-Jin Yoon},
  journal={arXiv preprint arXiv:2502.13716},
  year={2025}
}