IGDNet: Zero-Shot Robust Underexposed Image Enhancement via Illumination-Guided and Denoising

Current methods for restoring underexposed images typically rely on supervised learning with paired underexposed and well-illuminated images. However, collecting such datasets is often impractical in real-world scenarios. Moreover, these methods can lead to over-enhancement, distorting regions that are already well lit. To address these issues, we propose IGDNet, a zero-shot enhancement method that operates solely on a single test image, without requiring guiding priors or training data. IGDNet exhibits strong generalization ability and effectively suppresses noise while restoring illumination. The framework comprises a decomposition module and a denoising module. The former separates the image into illumination and reflection components via a densely connected network, while the latter enhances non-uniformly illuminated regions using an illumination-guided pixel-adaptive correction method. A noise pair is generated through downsampling and refined iteratively to produce the final result. Extensive experiments on four public datasets demonstrate that IGDNet significantly improves visual quality under complex lighting conditions. Quantitative results (PSNR 20.41 dB, SSIM 0.860) show that it outperforms 14 state-of-the-art unsupervised methods. The code will be released soon.
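The abstract names two mechanisms without giving code: an illumination-guided pixel-adaptive correction and a noise pair built by downsampling. As a rough illustration only, the PyTorch sketch below shows one plausible reading of each; the function names, the linear per-pixel gamma mapping, and the diagonal 2x2 downsampler (in the style of Zero-Shot Noise2Noise) are assumptions, not the authors' released implementation.

import torch
import torch.nn.functional as F

def illumination_guided_correction(img, illum, gamma_lo=0.4, gamma_hi=1.0):
    # Hypothetical pixel-adaptive gamma: dark regions (low illumination)
    # get a smaller exponent and are brightened more, while well-lit
    # regions keep gamma near 1 and are left almost untouched.
    gamma = gamma_lo + (gamma_hi - gamma_lo) * illum.clamp(0.0, 1.0)
    return img.clamp(min=1e-6) ** gamma

def downsample_pair(img):
    # Split an (N, C, H, W) image into two half-resolution views by
    # averaging opposite diagonals of each 2x2 block; the views share
    # content but carry roughly independent noise realizations, giving
    # a self-supervised noisy pair.
    c = img.shape[1]
    k1 = img.new_tensor([[[[0.5, 0.0], [0.0, 0.5]]]]).repeat(c, 1, 1, 1)
    k2 = img.new_tensor([[[[0.0, 0.5], [0.5, 0.0]]]]).repeat(c, 1, 1, 1)
    return (F.conv2d(img, k1, stride=2, groups=c),
            F.conv2d(img, k2, stride=2, groups=c))

Under this reading, the "refined iteratively" step would amount to an outer loop that trains a lightweight denoiser so its prediction on one downsampled view matches the other, stopping once the residual stabilizes.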
@article{yan2025_2507.02445,
  title={IGDNet: Zero-Shot Robust Underexposed Image Enhancement via Illumination-Guided and Denoising},
  author={Hailong Yan and Junjian Huang and Tingwen Huang},
  journal={arXiv preprint arXiv:2507.02445},
  year={2025}
}