
Eigenvalue Condition and model selection consistency of lasso

Abstract

Lasso is a popular method for sparse linear regression, especially for problems in which $p > n$. But when $p \gg n$, existing results in the literature on the model selection consistency of lasso typically require special and strong conditions, often involving information about the unknown true coefficients. An important question is: can lasso select the true variables without such strict conditions? In this paper, we develop a new line of reasoning to establish the model selection consistency of lasso. We propose one important but more standard and much weaker condition, the Eigenvalue Condition. We prove that the probability of lasso selecting wrong variables decays at an exponential rate in ultra-high-dimensional settings, with no restriction other than the Eigenvalue Condition. Since penalized least squares methods share a similar solution framework, this technical tool can be extended to other methods with a similar structure. In different dimensional settings, we show the different performance of lasso under different assumptions on the noise terms. Simulations are carried out to demonstrate our results.
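The support-recovery question the abstract raises can be illustrated with a small simulation; this is a hedged sketch using scikit-learn's `Lasso` (not the paper's experimental setup), with the dimensions, signal strength, and penalty level chosen here purely for illustration:

```python
# Illustrative sketch (not the paper's experiment): lasso support recovery
# in a p > n sparse linear regression, using scikit-learn's Lasso.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, s = 100, 200, 5                # n samples, p predictors, s true nonzeros
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:s] = 3.0                       # true support: the first s coefficients
y = X @ beta + 0.5 * rng.standard_normal(n)

# A penalty of order sqrt(log(p)/n), a scaling common in lasso theory
alpha = np.sqrt(np.log(p) / n)
fit = Lasso(alpha=alpha).fit(X, y)
selected = np.flatnonzero(fit.coef_ != 0)
print(sorted(selected))              # indices of the selected variables
```

With a strong signal and moderate noise, the selected set typically contains the true support, possibly with a few spurious variables; how fast the probability of any wrong selection vanishes as dimensions grow is what the paper's exponential-rate result quantifies.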
