GAN2GAN: Generative Noise Learning for Blind Image Denoising with Single
Noisy Images
We tackle a challenging blind image denoising problem in which only single, distinct noisy images are available for training a denoiser, and nothing is known about the noise except that it is zero-mean, additive, and independent of the clean image. In such a setting, which often occurs in practice, neither standard discriminative training nor the recently developed Noise2Noise (N2N) training can be applied: the former requires the underlying clean image for each given noisy image, and the latter requires two independently realized noisy images per clean image. To that end, we propose GAN2GAN (Generated-Artificial-Noise to Generated-Artificial-Noise), a method that first learns a generative model that synthesizes noisy image pairs by simulating independent realizations of the noise in the given single noisy images, and then iteratively trains a denoiser on those synthesized pairs, as in N2N training. Our results show that the denoiser trained with GAN2GAN achieves impressive performance in the blind denoising setting: it nearly matches the standard discriminatively-trained or N2N-trained models that have access to more information than ours, and significantly outperforms both the recent baseline for the same setting, Noise2Void, and the more conventional yet strong BM3D.
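The key principle GAN2GAN exploits is that of N2N training: when noise is zero-mean and the two realizations are independent, regressing one noisy realization onto the other yields, in expectation, the same estimator as regressing onto the clean image. A minimal sketch of this principle with a toy scalar affine "denoiser" (a hypothetical stand-in for the paper's network, not its actual architecture) could look as follows:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative assumption, not the paper's data): a fixed "clean"
# signal and two independent zero-mean noisy realizations of it, as N2N assumes.
x = rng.uniform(0.0, 1.0, size=2000)           # clean signal (pixel values)
sigma = 0.3
y1 = x + rng.normal(0.0, sigma, size=x.shape)  # first noisy realization
y2 = x + rng.normal(0.0, sigma, size=x.shape)  # independent second realization

def fit_affine(inputs, targets):
    """Least-squares fit of f(y) = a*y + b, a stand-in for a denoiser."""
    A = np.stack([inputs, np.ones_like(inputs)], axis=1)
    coef, *_ = np.linalg.lstsq(A, targets, rcond=None)
    return coef  # (a, b)

# N2N-style training: regress one noisy realization onto the other.
a_n2n, b_n2n = fit_affine(y1, y2)
# Oracle training: regress the noisy input onto the clean signal.
a_clean, b_clean = fit_affine(y1, x)

# Because the noise in y2 is zero-mean and independent of y1, the two fits
# agree up to sampling error, illustrating why N2N-style training on
# synthesized noisy pairs can substitute for clean targets.
print(a_n2n, a_clean)
```

GAN2GAN's contribution is to make this training applicable when only single noisy images exist, by learning to generate the second realization `y2` rather than observing it.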