
SEE: See Everything Every Time -- Adaptive Brightness Adjustment for Broad Light Range Images via Events

Abstract

Event cameras, with a high dynamic range exceeding 120 dB, significantly outperform traditional embedded cameras, robustly recording detailed scene changes under a wide variety of lighting conditions, including both low- and high-light situations. However, recent research on utilizing event data has focused primarily on low-light image enhancement, neglecting enhancement and brightness adjustment across a broader range of lighting conditions, such as normal or high illumination. Motivated by this gap, we pose a novel research question: how can events be employed to enhance and adaptively adjust the brightness of images captured under broad lighting conditions? To investigate this question, we first collect a new dataset, SEE-600K, consisting of 610,126 images and corresponding events across 202 scenarios, each featuring an average of four lighting conditions with over a 1000-fold variation in illumination. We then propose a framework that effectively utilizes events to smoothly adjust image brightness through brightness prompts. Our framework captures color through sensor patterns, uses cross-attention to model events as a brightness dictionary, and adjusts the image's dynamic range to form a broad light-range representation (BLR), which is then decoded at the pixel level according to the brightness prompt. Experimental results demonstrate that our method performs well not only on the low-light enhancement dataset but also on broader light-range image enhancement using the SEE-600K dataset. Additionally, our approach enables pixel-level brightness adjustment, providing flexibility for post-processing and inspiring further imaging applications. The dataset and source code are publicly available at: this https URL.
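The abstract describes the framework only at a high level. As a rough illustration of the two pieces it names (events modeled as a brightness dictionary via cross-attention, and pixel-level decoding conditioned on a brightness prompt), the PyTorch sketch below shows one plausible arrangement. All class names, tensor shapes, and dimensions are hypothetical assumptions for illustration and do not come from the paper or its released code.

```python
import torch
import torch.nn as nn

class BrightnessCrossAttention(nn.Module):
    """Cross-attention in which event tokens act as a learned
    'brightness dictionary' (keys/values) queried by image tokens.
    Hypothetical sketch; not the paper's actual module."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, img_tokens, event_tokens):
        # img_tokens:   (B, N, C) tokens from the frame (queries)
        # event_tokens: (B, M, C) tokens from the event stream (keys/values)
        fused, _ = self.attn(img_tokens, event_tokens, event_tokens)
        # Residual + norm: a stand-in for the broad light-range representation (BLR)
        return self.norm(img_tokens + fused)

class PromptDecoder(nn.Module):
    """Decodes the BLR to pixels, conditioned on a scalar brightness prompt."""
    def __init__(self, dim: int, out_ch: int = 3):
        super().__init__()
        self.prompt_embed = nn.Linear(1, dim)   # embed the brightness prompt
        self.to_pixels = nn.Linear(dim, out_ch)

    def forward(self, blr_tokens, prompt):
        # prompt: (B, 1) target brightness level, e.g. in [0, 1]
        cond = self.prompt_embed(prompt).unsqueeze(1)  # (B, 1, C), broadcast-added
        return self.to_pixels(blr_tokens + cond)       # per-token RGB values

# Toy usage: 256 image tokens, 512 event tokens, 64-dim features
B, C = 2, 64
fuse, decode = BrightnessCrossAttention(C), PromptDecoder(C)
blr = fuse(torch.randn(B, 256, C), torch.randn(B, 512, C))
rgb = decode(blr, torch.rand(B, 1))
print(rgb.shape)  # torch.Size([2, 256, 3])
```

The intuition behind this arrangement: using event tokens as keys/values lets the frame query brightness information the sensor may have clipped, while the scalar prompt selects where in the recovered dynamic range the output image should sit, enabling the smooth, pixel-level adjustment the abstract claims.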

@article{lu2025_2502.21120,
  title={SEE: See Everything Every Time -- Adaptive Brightness Adjustment for Broad Light Range Images via Events},
  author={Yunfan Lu and Xiaogang Xu and Hao Lu and Yanlin Qian and Pengteng Li and Huizai Yao and Bin Yang and Junyi Li and Qianyi Cai and Weiyu Guo and Hui Xiong},
  journal={arXiv preprint arXiv:2502.21120},
  year={2025}
}