We consider the compressed sensing problem, where the object $x_0 \in \mathbb{R}^N$ is to be recovered from incomplete measurements $y = A x_0 + z$; here the sensing matrix $A$ is an $n \times N$ random matrix with iid Gaussian entries and $n < N$. A popular method of sparsity-promoting reconstruction is $\ell^1$-penalized least-squares reconstruction (aka LASSO, Basis Pursuit). It is currently popular to consider the strict sparsity model, where the object $x_0$ is nonzero in only a small fraction of entries. In this paper, we instead consider the much more broadly applicable $\ell^p$-sparsity model, where $x_0$ is sparse in the sense of having $\ell^p$ norm bounded by $\xi \cdot N^{1/p}$ for some fixed $0 < p \leq 1$ and $\xi > 0$. We study an asymptotic regime in which $n$ and $N$ both tend to infinity with limiting ratio $n/N = \delta \in (0,1)$, both in the noisy ($z \neq 0$) and noiseless ($z = 0$) cases. Under weak assumptions on $x_0$, we are able to precisely evaluate the worst-case asymptotic minimax mean-squared reconstruction error (AMSE) for $\ell^1$-penalized least-squares: min over penalization parameters, max over $\ell^p$-sparse objects $x_0$. We exhibit the asymptotically least-favorable object (hardest sparse signal to recover) and the maximin penalization. Our explicit formulas unexpectedly involve quantities appearing classically in statistical decision theory. Occurring in the present setting, they reflect a deeper connection between $\ell^1$-penalized minimization and scalar soft thresholding. This connection, which follows from earlier work of the authors and collaborators on the AMP iterative thresholding algorithm, is carefully explained. Our approach also gives precise results under weak-$\ell^p$ ball coefficient constraints, as we show here.
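The scalar soft-thresholding rule mentioned in the abstract, $\eta(y; \lambda) = \mathrm{sgn}(y)\,\max(|y| - \lambda, 0)$, is the closed-form minimizer of the one-dimensional $\ell^1$-penalized least-squares problem $\min_u \frac{1}{2}(u - y)^2 + \lambda |u|$. A minimal NumPy sketch (variable names and the grid-search check are our own, not the paper's) illustrates this:

```python
import numpy as np

def soft_threshold(y, lam):
    """Soft thresholding: shrink y toward zero by lam, clipping to zero.
    This is the exact minimizer of 0.5*(u - y)**2 + lam*abs(u) over scalar u."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

# Sanity check against brute-force minimization of the scalar objective on a grid.
y, lam = 1.3, 0.5
grid = np.linspace(-3.0, 3.0, 600001)          # step size 1e-5
objective = 0.5 * (grid - y) ** 2 + lam * np.abs(grid)
u_star = grid[np.argmin(objective)]

print(soft_threshold(y, lam))  # closed-form answer: |y| - lam = 0.8, with sign of y
print(u_star)                  # grid search agrees up to the grid resolution
```

The same operator applied coordinatewise is the building block of AMP-style iterative thresholding, which is the connection the abstract refers to.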