Event-based Continuous Color Video Decompression from Single Frames

30 November 2023
Ziyun Wang
Friedhelm Hamann
Kenneth Chaney
Wen Jiang
Guillermo Gallego
Kostas Daniilidis
Abstract

We present ContinuityCam, a novel approach that generates continuous video from a single static RGB image and an event camera stream. Conventional cameras struggle to capture high-speed motion due to bandwidth and dynamic range limitations. Event cameras are ideal sensors for this problem because they encode compressed change information at high temporal resolution. In this work, we tackle event-based continuous color video decompression: pairing a single static color frame with event data to reconstruct a temporally continuous video. Our approach combines continuous long-range motion modeling with a neural synthesis model, enabling frame prediction at arbitrary times within the event stream. Because our method requires only an initial image, it is more robust to sudden motions and light changes, minimizes prediction latency, and reduces bandwidth usage. We also introduce a novel single-lens beamsplitter setup that acquires aligned images and events, and a new, challenging Event Extreme Decompression Dataset (E2D2) that tests the method under diverse lighting conditions and motion profiles. We thoroughly evaluate our method on color frame reconstruction, outperforming baseline methods by 3.61 dB in PSNR and reducing LPIPS by 33%, and show superior results on two downstream tasks.
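
To put the quantitative claims above in context (a 3.61 dB PSNR gain and a 33% LPIPS reduction), the sketch below shows how these two frame-reconstruction metrics are conventionally computed per frame. It is a minimal illustration rather than the authors' released evaluation code; the helper functions `psnr` and `lpips_distance` are hypothetical names, and the `lpips` package with its common AlexNet backbone is assumed as the LPIPS implementation.

```python
# Minimal sketch of the standard frame-reconstruction metrics reported above
# (PSNR and LPIPS). Illustrative only; not the authors' evaluation code.
# LPIPS here uses the reference `lpips` package (pip install lpips).

import numpy as np
import torch
import lpips

_lpips_fn = lpips.LPIPS(net="alex")  # learned perceptual metric; lower is better


def psnr(pred: np.ndarray, target: np.ndarray, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio (dB) between images scaled to [0, max_val]."""
    mse = np.mean((pred.astype(np.float64) - target.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val**2 / mse)


def lpips_distance(pred: np.ndarray, target: np.ndarray) -> float:
    """LPIPS between two HxWx3 float images in [0, 1]."""
    # The lpips package expects (N, 3, H, W) torch tensors scaled to [-1, 1].
    def to_tensor(im: np.ndarray) -> torch.Tensor:
        return torch.from_numpy(im).permute(2, 0, 1)[None].float() * 2.0 - 1.0

    with torch.no_grad():
        return _lpips_fn(to_tensor(pred), to_tensor(target)).item()
```

In a benchmark like the one described, both metrics would be averaged over all reconstructed frames; PSNR rewards pixel-wise fidelity, while LPIPS captures perceptual similarity, which is why the paper reports both.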

View on arXiv
@article{wang2025_2312.00113,
  title={Event-based Continuous Color Video Decompression from Single Frames},
  author={Ziyun Wang and Friedhelm Hamann and Kenneth Chaney and Wen Jiang and Guillermo Gallego and Kostas Daniilidis},
  journal={arXiv preprint arXiv:2312.00113},
  year={2025}
}