$\ell^1$ penalty for ill-posed inverse problems

Abstract

We tackle the problem of recovering an unknown signal observed in an ill-posed inverse problem framework. More precisely, we study a procedure commonly used in numerical analysis and image deblurring: minimizing an empirical loss function balanced by an $\ell^1$ penalty, acting as a sparsity constraint. We prove that, by choosing a proper loss function, this estimation technique yields an adaptive estimator, in the sense that it converges at the optimal rate of convergence without prior knowledge of the regularity of the true solution.
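To illustrate the kind of $\ell^1$-penalized minimization the abstract refers to, here is a minimal sketch in the simplest setting, where the loss is a plain quadratic fit and the forward operator is the identity. In that case the penalized minimizer has a closed form: the soft-thresholding operator. This is a toy illustration only, not the paper's actual procedure, loss function, or operator; all names and parameter values below are hypothetical.

```python
import numpy as np

def soft_threshold(z, lam):
    # Closed-form minimizer of 0.5 * ||z - x||^2 + lam * ||x||_1:
    # elementwise shrinkage of z toward zero by lam.
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

# Toy denoising example: a sparse signal observed with additive noise.
rng = np.random.default_rng(0)
x_true = np.zeros(50)
x_true[[3, 17, 42]] = [2.0, -3.0, 1.5]   # three nonzero coefficients
y = x_true + 0.1 * rng.standard_normal(50)

# The l1 penalty acts as a sparsity constraint: small (noise-level)
# coordinates are set exactly to zero, large ones are kept (shrunk by lam).
x_hat = soft_threshold(y, lam=0.3)
print("nonzero coefficients kept:", np.count_nonzero(x_hat))
```

With a genuinely ill-posed forward operator, no closed form is available and the same objective is typically minimized iteratively (e.g. by proximal-gradient schemes that apply this thresholding step at each iteration).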