Zero-Shot Low Light Image Enhancement with Diffusion Prior

In this paper, we present a simple yet highly effective "free lunch" solution for low-light image enhancement (LLIE), which aims to restore low-light images as if they had been acquired in well-illuminated environments. Our method requires no optimization, training, fine-tuning, text conditioning, or hyperparameter adjustments, yet it consistently reconstructs low-light images with superior fidelity. Specifically, we leverage a pre-trained text-to-image diffusion prior, learned from a large collection of natural images, and use the features within the model itself to guide inference, in contrast to existing methods that depend on customized constraints. Comprehensive quantitative evaluations demonstrate that our approach outperforms state-of-the-art (SOTA) methods on established datasets, while qualitative analyses indicate enhanced color accuracy and the correction of subtle chromatic deviations. Furthermore, additional experiments reveal that our method, without any modifications, achieves SOTA-comparable performance on the auto white balance (AWB) task.
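
To make the "frozen pre-trained prior, no training" idea concrete, the sketch below runs a naive SDEdit-style refinement pass with a pre-trained Stable Diffusion model through the diffusers library. The model choice, the inverse-gamma pre-brightening step, and the noise strength are illustrative assumptions not taken from the abstract, and the paper's self-guidance through the model's own internal features is not reproduced here; this is only a minimal example of invoking a diffusion prior zero-shot, without fine-tuning or text conditioning.

    # Illustrative baseline sketch only -- not the authors' method.
    # Exposes a frozen text-to-image diffusion prior at inference time,
    # with no training, fine-tuning, or text conditioning.
    import numpy as np
    import torch
    from PIL import Image
    from diffusers import StableDiffusionImg2ImgPipeline

    # Load a frozen, pre-trained diffusion prior (model choice is an assumption).
    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # Hypothetical pre-brightening via inverse gamma; not described in the abstract.
    img = np.asarray(Image.open("low_light.png").convert("RGB")).astype(np.float32) / 255.0
    bright = np.clip(img ** (1.0 / 2.2), 0.0, 1.0)
    init = Image.fromarray((bright * 255).astype(np.uint8)).resize((512, 512))

    # SDEdit-style pass: partially noise the image, then let the frozen prior denoise it.
    # An empty prompt with guidance_scale=1.0 keeps the process free of text conditioning.
    out = pipe(
        prompt="",
        image=init,
        strength=0.3,            # small strength preserves scene content
        guidance_scale=1.0,      # disables classifier-free guidance
        num_inference_steps=50,
    ).images[0]
    out.save("enhanced.png")

In practice, such a plain denoising pass mainly regularizes noise and color; the abstract's claim is that guidance from the model's own features, rather than custom constraints, is what steers the restoration toward a well-illuminated result.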
@article{cho2025_2412.13401,
  title   = {Zero-Shot Low Light Image Enhancement with Diffusion Prior},
  author  = {Joshua Cho and Sara Aghajanzadeh and Zhen Zhu and D. A. Forsyth},
  journal = {arXiv preprint arXiv:2412.13401},
  year    = {2025}
}