Towards a General-Purpose Zero-Shot Synthetic Low-Light Image and Video Pipeline

16 April 2025
Joanne Lin, Crispian Morris, Ruirui Lin, Fan Zhang, David Bull, Nantheera Anantrasirichai
Abstract

Low-light conditions pose significant challenges for both human and machine annotation. This has led to a lack of research into machine understanding of low-light images and, in particular, videos. A common approach is to apply annotations obtained from high-quality datasets to synthetically created low-light versions; however, these approaches are often limited by the use of unrealistic noise models. In this paper, we propose a new Degradation Estimation Network (DEN), which synthetically generates realistic standard RGB (sRGB) noise without requiring camera metadata. This is achieved by estimating the parameters of physics-informed noise distributions, trained in a self-supervised manner. This zero-shot approach allows our method to generate synthetic noisy content with a diverse range of realistic noise characteristics, unlike other methods that focus on recreating the noise characteristics of the training data. We evaluate our proposed synthetic pipeline using various methods trained on its synthetic data for typical low-light tasks, including synthetic noise replication, video enhancement, and object detection, showing improvements of up to 24% KLD, 21% LPIPS, and 62% AP50-95, respectively.
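
To make the idea of physics-informed noise synthesis concrete, below is a minimal Python sketch (not the authors' implementation) of a heteroscedastic shot-plus-read noise model applied to a clean sRGB frame. The function name, the parameters shot_gain, read_sigma, and exposure_scale, and their default values are illustrative assumptions; in the proposed pipeline, the Degradation Estimation Network would predict per-image noise parameters in a self-supervised manner rather than using fixed values.

import numpy as np

def synthesize_low_light_noise(clean_srgb, shot_gain=0.01, read_sigma=0.02,
                               exposure_scale=0.2, rng=None):
    """Apply a simple physics-informed noise model to a clean sRGB image.

    clean_srgb:     float array in [0, 1], shape (H, W, 3)
    shot_gain:      strength of the signal-dependent (shot) noise
    read_sigma:     standard deviation of the signal-independent (read) noise
    exposure_scale: darkening factor simulating a low-light exposure
    """
    rng = np.random.default_rng() if rng is None else rng

    # Darken the image to mimic a short exposure under low illumination.
    dark = clean_srgb * exposure_scale

    # Heteroscedastic Gaussian approximation of Poisson shot noise:
    # variance grows linearly with the signal intensity.
    shot_var = shot_gain * dark
    shot_noise = rng.normal(0.0, 1.0, dark.shape) * np.sqrt(shot_var)

    # Signal-independent read noise from the sensor electronics.
    read_noise = rng.normal(0.0, read_sigma, dark.shape)

    noisy = np.clip(dark + shot_noise + read_noise, 0.0, 1.0)
    return noisy

if __name__ == "__main__":
    # Stand-in for a clean sRGB frame; in practice this would be a real image.
    clean = np.random.rand(64, 64, 3).astype(np.float32)
    noisy = synthesize_low_light_noise(clean)
    print(noisy.shape, noisy.min(), noisy.max())

In this sketch the noise parameters are fixed per call; sampling them from broad distributions, or predicting them per image as the paper describes, is what would give the synthetic data its diversity of realistic noise characteristics.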

View on arXiv
@article{lin2025_2504.12169,
  title={Towards a General-Purpose Zero-Shot Synthetic Low-Light Image and Video Pipeline},
  author={Joanne Lin and Crispian Morris and Ruirui Lin and Fan Zhang and David Bull and Nantheera Anantrasirichai},
  journal={arXiv preprint arXiv:2504.12169},
  year={2025}
}