We consider inverse problems in which the conditional distribution of the observation $y$ given the latent variable of interest $x$ (also known as the forward model) is known, and we have access to a data set in which multiple instances of $x$ and $y$ are both observed. In this context, algorithm unrolling has become a popular approach for designing state-of-the-art deep neural network architectures that effectively exploit the forward model. We analyze the statistical complexity of the gradient descent network (GDN), an algorithm unrolling architecture driven by proximal gradient descent. We show that the unrolling depth needed for the optimal statistical performance of GDNs is of order $\log(n)/\log(\rho^{-1})$, where $n$ is the sample size and $\rho \in (0,1)$ is the convergence rate of the corresponding gradient descent algorithm. We also show that when the negative log-density of the latent variable $x$ has a simple proximal operator, a GDN unrolled at depth $D$ can solve the inverse problem at the parametric rate $O(D/\sqrt{n})$. Our results thus also suggest that algorithm unrolling models are prone to overfitting as the unrolling depth increases. We provide several examples to illustrate these results.
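To make the architecture concrete, the following is a minimal sketch of a gradient descent network unrolled to a fixed depth, in the special case of a linear forward model $y = Ax + \text{noise}$ with a Laplace-type prior, whose negative log-density has the soft-thresholding map as its (simple) proximal operator. The function names, the choice of forward model, and the parameter values are illustrative assumptions, not details from the paper; in a trained GDN the step size and threshold (or richer per-layer weights) would be learned from the data set of $(x, y)$ pairs rather than fixed.

```python
import numpy as np

def soft_threshold(z, tau):
    # Proximal operator of tau * ||.||_1 -- the "simple" prox arising
    # from a Laplace prior on the latent variable x (illustrative choice).
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def gdn_forward(y, A, depth, step, tau):
    """One forward pass of an unrolled proximal gradient descent network.

    Each of the `depth` layers performs a gradient step on the data-fidelity
    term 0.5 * ||A x - y||^2 followed by the proximal operator of the prior,
    mirroring one iteration of proximal gradient descent (ISTA-style).
    """
    x = np.zeros(A.shape[1])
    for _ in range(depth):
        grad = A.T @ (A @ x - y)                  # gradient of the quadratic fidelity
        x = soft_threshold(x - step * grad, step * tau)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 20)) / np.sqrt(50)   # toy linear forward model
    x_true = np.zeros(20)
    x_true[:3] = [1.0, -2.0, 1.5]                     # sparse latent variable
    y = A @ x_true                                    # noiseless observation
    step = 1.0 / np.linalg.norm(A, 2) ** 2            # 1/L for the quadratic term
    x_hat = gdn_forward(y, A, depth=200, step=step, tau=0.01)
    print(np.linalg.norm(x_hat - x_true))
```

Unrolling `depth` times fixes the computational graph of the network; the paper's results concern how this depth should scale with the sample size when such layers are learned rather than hand-set.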