A Survey of 3D Reconstruction with Event Cameras: From Event-based Geometry to Neural 3D Rendering

Abstract

Event cameras have emerged as promising sensors for 3D reconstruction thanks to their ability to capture per-pixel brightness changes asynchronously. Unlike conventional frame-based cameras, they produce sparse, temporally rich data streams that enable more accurate 3D reconstruction and open up the possibility of reconstruction in extreme conditions such as high-speed motion, low light, or high dynamic range scenes. In this survey, we provide the first comprehensive review focused exclusively on 3D reconstruction with event cameras. We categorise existing works into three major types by input modality: stereo, monocular, and multimodal systems, and further classify them by reconstruction approach, including geometry-based methods, deep learning-based methods, and recent neural rendering techniques such as Neural Radiance Fields and 3D Gaussian Splatting. Within the finest-grained groups, methods sharing a research focus are ordered chronologically. We also summarise public datasets relevant to event-based 3D reconstruction. Finally, we highlight current research limitations in data availability, evaluation, representation, and dynamic scene handling, and outline promising directions for future research. This survey aims to serve as a comprehensive reference and a roadmap for future developments in event-driven 3D reconstruction.
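
To make the event data model concrete: an event camera emits a stream of tuples (x, y, t, p), where (x, y) is the pixel location, t a timestamp, and p the polarity of the brightness change. The following Python sketch is illustrative only and not from the survey; the function name, window bounds, and toy events are our assumptions. It shows one common preprocessing step, accumulating an asynchronous event stream into a dense "event frame" that downstream reconstruction methods can consume.

import numpy as np

# A single event: pixel location (x, y), timestamp t (e.g. microseconds),
# and polarity p (+1 for a brightness increase, -1 for a decrease).
events = np.array(
    [(12, 30, 1_000, +1), (12, 31, 1_250, -1), (40, 22, 1_900, +1)],
    dtype=[('x', 'u2'), ('y', 'u2'), ('t', 'u8'), ('p', 'i1')],
)

def accumulate_events(events, height, width, t_start, t_end):
    """Sum event polarities per pixel over [t_start, t_end),
    producing a dense event frame from the sparse stream."""
    frame = np.zeros((height, width), dtype=np.int32)
    window = events[(events['t'] >= t_start) & (events['t'] < t_end)]
    # np.add.at handles repeated events at the same pixel correctly.
    np.add.at(frame, (window['y'], window['x']), window['p'])
    return frame

frame = accumulate_events(events, height=48, width=64, t_start=0, t_end=2_000)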

@article{xu2025_2505.08438,
  title={A Survey of 3D Reconstruction with Event Cameras: From Event-based Geometry to Neural 3D Rendering},
  author={Chuanzhi Xu and Haoxian Zhou and Langyi Chen and Haodong Chen and Ying Zhou and Vera Chung and Qiang Qu},
  journal={arXiv preprint arXiv:2505.08438},
  year={2025}
}