Consistent selection via the Lasso for high dimensional approximating regression models

Abstract

In this article we investigate consistency of selection in regression models via the popular Lasso method. Here we depart from the traditional linear regression assumption and consider approximations of the regression function $f$ with elements of a given dictionary of $M$ functions. The target for consistency is the index set of those functions from this dictionary that realize the most parsimonious approximation to $f$ among all linear combinations belonging to an $L_2$ ball centered at $f$ and of radius $r_{n,M}^2$. In this framework we show that a consistent estimate of this index set can be derived via $\ell_1$ penalized least squares, with a data-dependent penalty and with tuning sequence $r_{n,M} > \sqrt{\log(Mn)/n}$, where $n$ is the sample size. Our results hold for any $1 \leq M \leq n^{\gamma}$, for any $\gamma > 0$.
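As an illustration only (not the authors' implementation), the selection procedure described above can be sketched with scikit-learn's `Lasso`: fit an $\ell_1$ penalized least squares over a dictionary of $M$ functions, with the penalty level set on the order of $\sqrt{\log(Mn)/n}$, and read off the index set of nonzero coefficients. The cosine dictionary, noise level, and constant factor below are arbitrary choices made for the example.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, M = 200, 50  # sample size and dictionary size (illustrative values)

# Dictionary: M basis functions evaluated at n design points
# (cosines here; the choice of dictionary is purely illustrative).
x = rng.uniform(0, 1, size=n)
X = np.column_stack([np.cos(np.pi * j * x) for j in range(1, M + 1)])

# A regression function f well approximated by the first 3 dictionary
# elements, observed with small additive noise.
beta_true = np.zeros(M)
beta_true[:3] = [2.0, -1.5, 1.0]
y = X @ beta_true + 0.1 * rng.standard_normal(n)

# Penalty level of the order sqrt(log(Mn)/n), as in the abstract's
# tuning sequence r_{n,M} (the constant factor is an assumption).
alpha = np.sqrt(np.log(M * n) / n)

# l1 penalized least squares; the estimated index set is the support
# of the fitted coefficient vector.
lasso = Lasso(alpha=alpha, fit_intercept=False).fit(X, y)
selected = np.flatnonzero(np.abs(lasso.coef_) > 1e-8)
print(selected)
```

With a penalty at this level, the support of the fitted coefficient vector serves as the estimate of the most parsimonious approximating index set; in this toy setup it should recover the first three dictionary elements.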
