PatchNR: Learning from Small Data by Patch Normalizing Flow Regularization

Inverse Problems (IP), 2022
Abstract

Learning neural networks using only a small amount of data is an important ongoing research topic with tremendous potential for applications. In this paper, we introduce a regularizer for the variational modeling of inverse problems in imaging based on normalizing flows. Our regularizer, called patchNR, involves a normalizing flow learned on patches of very few images. In particular, the training is independent of the considered inverse problem, so the same regularizer can be used for different forward operators acting on the same class of images. By investigating the distribution of patches versus that of whole images, we prove that our variational model is indeed a maximum a posteriori (MAP) approach. Our model can be generalized to conditional patchNRs if additional supervised information is available. Numerical examples for superresolution of material images and low-dose or limited-angle computed tomography (CT) demonstrate that our method provides high-quality results among methods with similar assumptions, while requiring only few data.
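The abstract describes a variational model whose regularizer is the negative log-likelihood of image patches under a normalizing flow. The following minimal NumPy sketch illustrates that structure only: the learned flow is replaced by a hypothetical fixed invertible affine map (the actual patchNR trains a neural flow), and the patch size, stride, and regularization weight are illustrative assumptions, not values from the paper.

```python
import numpy as np

def extract_patches(img, p=6, stride=3):
    """Collect flattened p-by-p patches from a 2D image on a regular grid."""
    H, W = img.shape
    patches = []
    for i in range(0, H - p + 1, stride):
        for j in range(0, W - p + 1, stride):
            patches.append(img[i:i + p, j:j + p].ravel())
    return np.stack(patches)

# Hypothetical stand-in for a trained normalizing flow on patches:
# a fixed invertible affine map z = W x + b near the identity.
rng = np.random.default_rng(0)
d = 36  # flattened 6x6 patch dimension
Wm = np.eye(d) + 0.01 * rng.standard_normal((d, d))
b = 0.1 * rng.standard_normal(d)
_, logdet = np.linalg.slogdet(Wm)  # log|det J| of the affine map

def patch_nll(patches):
    """Negative log-likelihood of patches under the flow with a
    standard normal base distribution (additive constant dropped):
    -log p(x) = 0.5 * ||z||^2 - log|det J|, where z = flow(x)."""
    z = patches @ Wm.T + b
    return 0.5 * np.sum(z ** 2, axis=1) - logdet

def patchnr_objective(x, y, A, lam=0.1, p=6):
    """Variational objective: data fidelity plus the patchNR term,
    i.e. the mean patch NLL weighted by lam (illustrative weight)."""
    residual = A(x) - y
    data_fit = 0.5 * np.sum(residual ** 2)
    reg = np.mean(patch_nll(extract_patches(x, p)))
    return data_fit + lam * reg
```

Because the regularizer depends only on the patch distribution and not on the forward operator `A`, the same flow could be reused here for, say, a blur or a tomographic projection by swapping `A`.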
