Simple Error Bounds for Regularized Noisy Linear Inverse Problems

Abstract

Consider estimating a structured signal $\mathbf{x}_0$ from linear, underdetermined and noisy measurements $\mathbf{y}=\mathbf{A}\mathbf{x}_0+\mathbf{z}$, via solving a variant of the lasso algorithm: $\hat{\mathbf{x}}=\arg\min_\mathbf{x}\{\|\mathbf{y}-\mathbf{A}\mathbf{x}\|_2+\lambda f(\mathbf{x})\}$. Here, $f$ is a convex function aiming to promote the structure of $\mathbf{x}_0$, say the $\ell_1$-norm to promote sparsity or the nuclear norm to promote low-rankness. We assume that the entries of $\mathbf{A}$ are independent and normally distributed and make no assumptions on the noise vector $\mathbf{z}$, other than it being independent of $\mathbf{A}$. Under this generic setup, we derive a general, non-asymptotic and rather tight upper bound on the $\ell_2$-norm of the estimation error $\|\hat{\mathbf{x}}-\mathbf{x}_0\|_2$. Our bound is geometric in nature and obeys a simple formula; the roles of $\lambda$, $f$ and $\mathbf{x}_0$ are all captured by a single summary parameter $\delta(\lambda\partial f(\mathbf{x}_0))$, termed the Gaussian squared distance to the scaled subdifferential. We connect our result to the literature and verify its validity through simulations.
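
To make the setup concrete, the following is a minimal sketch (not the paper's actual experiments) that solves the estimator above with $f(\mathbf{x})=\|\mathbf{x}\|_1$ on synthetic Gaussian data and reports the estimation error. It assumes the numpy and cvxpy packages; the dimensions, noise level, and choice of $\lambda$ are purely illustrative.

```python
# Sketch only: solve  x_hat = argmin_x ||y - A x||_2 + lam * ||x||_1
# on synthetic data. Note the *un-squared* residual norm, which is the
# lasso variant considered in the abstract. Assumes numpy and cvxpy.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, m, k = 200, 100, 10           # ambient dim, measurements (m < n), sparsity

# Sparse ground truth x0 and i.i.d. Gaussian measurement matrix A.
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n))
z = 0.1 * rng.standard_normal(m)  # noise, independent of A (as assumed)
y = A @ x0 + z

# f(x) = ||x||_1 promotes sparsity; lam = 1.0 is an illustrative choice.
lam = 1.0
x = cp.Variable(n)
objective = cp.Minimize(cp.norm(y - A @ x, 2) + lam * cp.norm(x, 1))
cp.Problem(objective).solve()

print("estimation error ||x_hat - x0||_2 =", np.linalg.norm(x.value - x0))
```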
