
Adaptive minimax optimality in statistical inverse problems via SOLIT -- Sharp Optimal Lepskii-Inspired Tuning

Abstract

We consider statistical linear inverse problems in separable Hilbert spaces and filter-based reconstruction methods of the form $\hat f_\alpha = q_\alpha\left(T^*T\right)T^*Y$, where $Y$ is the available data, $T$ the forward operator, $\left(q_\alpha\right)_{\alpha \in \mathcal A}$ an ordered filter, and $\alpha > 0$ a regularization parameter. Whenever such a method is used in practice, $\alpha$ has to be chosen appropriately. Typically, the aim is to find, or at least approximate, the best possible $\alpha$ in the sense that the mean squared error (MSE) $\mathbb E\left[\Vert \hat f_\alpha - f^\dagger\Vert^2\right]$ with respect to the true solution $f^\dagger$ is minimized. In this paper, we introduce the Sharp Optimal Lepskiĭ-Inspired Tuning (SOLIT) method, which yields an a posteriori parameter choice rule ensuring adaptive minimax rates of convergence. It depends only on $Y$, the noise level $\sigma$, the operator $T$, and the filter $\left(q_\alpha\right)_{\alpha \in \mathcal A}$, and does not require any problem-dependent tuning of further parameters. We prove an oracle inequality for the corresponding MSE in a general setting and derive the rates of convergence in different scenarios. By a careful analysis we show that no other a posteriori parameter choice rule can yield a better performance in terms of the order of the convergence rate of the MSE. In particular, our results reveal that the common understanding that Lepskiĭ-type methods in inverse problems necessarily lose a logarithmic factor is wrong. In addition, the empirical performance of SOLIT is examined in simulations.
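To illustrate the kind of a posteriori parameter choice the abstract refers to, the following is a minimal sketch of a generic Lepskiĭ-type balancing principle applied to a discretized problem $Y = Tf + \sigma\,\xi$ with a Tikhonov filter. It is not the SOLIT rule itself (whose comparison terms and constants are sharper and are specified in the paper); the filter choice, the constant `kappa`, and the example operator are illustrative assumptions.

```python
# Illustrative Lepskii-type balancing principle for a discretized linear
# inverse problem Y = T f + sigma * noise.  NOT the exact SOLIT rule from
# the paper; the Tikhonov filter and the constant kappa are assumptions.
import numpy as np

def tikhonov_estimate(T, Y, alpha):
    """Filter-based estimate f_hat = q_alpha(T^T T) T^T Y with the
    Tikhonov filter q_alpha(s) = 1 / (s + alpha)."""
    U, s, Vt = np.linalg.svd(T, full_matrices=False)
    coeffs = (s / (s**2 + alpha)) * (U.T @ Y)   # q_alpha(s_k^2) * s_k * <u_k, Y>
    return Vt.T @ coeffs

def noise_std(T, alpha, sigma):
    """Standard deviation of the stochastic error of f_hat_alpha under
    white noise of level sigma: sigma * ||q_alpha(T^T T) T^T||_HS."""
    s = np.linalg.svd(T, compute_uv=False)
    return sigma * np.sqrt(np.sum((s / (s**2 + alpha))**2))

def lepskii_choice(T, Y, sigma, alphas, kappa=4.0):
    """Pick alpha from a grid ordered from strong to weak regularization:
    take the largest alpha whose estimate is consistent with all less
    regularized ones, ||f_hat_i - f_hat_j|| <= kappa * noise_std(alpha_j)."""
    alphas = sorted(alphas, reverse=True)
    estimates = [tikhonov_estimate(T, Y, a) for a in alphas]
    stds = [noise_std(T, a, sigma) for a in alphas]
    for i, f_i in enumerate(estimates):
        if all(np.linalg.norm(f_i - estimates[j]) <= kappa * stds[j]
               for j in range(i + 1, len(alphas))):
            return alphas[i], f_i
    return alphas[-1], estimates[-1]   # fall back to the least regularized estimate

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 200
    T = np.tril(np.ones((n, n))) / n               # discretized integration operator
    f_true = np.sin(np.linspace(0, np.pi, n))
    sigma = 1e-3
    Y = T @ f_true + sigma * rng.standard_normal(n)
    alpha_hat, f_hat = lepskii_choice(T, Y, sigma, np.logspace(-1, -8, 30))
    rel_err = np.linalg.norm(f_hat - f_true) / np.linalg.norm(f_true)
    print(f"chosen alpha = {alpha_hat:.2e}, relative error = {rel_err:.3f}")
```

As in the abstract, the rule above depends only on the data, the noise level, the operator, and the filter; the point of SOLIT is that a suitably sharpened version of such a comparison scheme attains the minimax rate without the logarithmic loss usually attributed to Lepskiĭ-type methods.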
