
Consistent Learning by Composite Proximal Thresholding

Abstract

We investigate random-design least-squares regression with prediction functions that are linear combinations of elements of a possibly infinite-dimensional dictionary. We propose a new flexible composite regularization model, which makes it possible to apply various priors to the coefficients of the prediction function, including hard constraints. We show that the estimators obtained by minimizing the regularized empirical risk are consistent. Moreover, we design an error-tolerant composite proximal thresholding algorithm for computing the estimators, and we establish its convergence.
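The abstract does not spell out the algorithm, but proximal thresholding schemes of this kind typically alternate a gradient step on the least-squares risk with a proximal (thresholding) step induced by the regularizer. The following is a minimal illustrative sketch of the classical special case with an l1 prior (iterative soft-thresholding, ISTA) on a finite dictionary; the paper's composite model and error-tolerant iteration are more general, and all names and parameters below are illustrative assumptions, not the authors' method.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t * ||.||_1: shrinks each coefficient toward zero.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam, step, n_iter=200):
    # Iterative shrinkage-thresholding for
    #   min_w  0.5 * ||A w - y||^2 + lam * ||w||_1,
    # where the columns of A play the role of the dictionary elements.
    w = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ w - y)                         # gradient of the smooth term
        w = soft_threshold(w - step * grad, step * lam)  # proximal (thresholding) step
    return w

# Toy example: a sparse coefficient vector over a random dictionary.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
w_true = np.zeros(20)
w_true[:3] = [2.0, -1.5, 1.0]
y = A @ w_true
step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, with L the Lipschitz constant of the gradient
w_hat = ista(A, y, lam=0.1, step=step)
```

Replacing the soft-thresholding step with the proximal operator of a different (possibly composite) regularizer, or with a projection for hard constraints, yields the general pattern the abstract refers to.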
