Thresholded Lasso for high dimensional variable selection and statistical estimation

Given $n$ noisy samples with $p$ dimensions, where $n \ll p$, we show that the multi-step thresholding procedure based on the Lasso -- we call it the {\it Thresholded Lasso} -- can accurately estimate a sparse vector $\beta \in \mathbb{R}^p$ in the linear model $Y = X\beta + \epsilon$, where $X$ is an $n \times p$ design matrix normalized to have column $\ell_2$ norm $\sqrt{n}$, and $\epsilon \sim N(0, \sigma^2 I_n)$. We show that under the restricted eigenvalue (RE) condition (Bickel-Ritov-Tsybakov 09), it is possible to achieve the $\ell_2$ loss within a logarithmic factor of the ideal mean square error one would achieve with an {\em oracle}, while selecting a sufficiently sparse model -- hence achieving {\it sparse oracle inequalities}; the oracle would supply perfect information about which coordinates are non-zero and which are above the noise level. In some sense, the Thresholded Lasso recovers the choices that would have been made by the $\ell_0$-penalized least squares estimators, in that it selects a sufficiently sparse model without sacrificing accuracy in estimating $\beta$ and in predicting $X\beta$. We also show for the Gauss-Dantzig selector (Cand\`{e}s-Tao 07) that if $X$ obeys a uniform uncertainty principle and the true parameter is sufficiently sparse, one achieves the sparse oracle inequalities above while allowing at most $s_0$ irrelevant variables in the model in the worst case, where $s_0 \leq s$ is the smallest integer such that, for $\lambda = \sqrt{2\log p/n}$, $\sum_{i=1}^p \min(\beta_i^2, \lambda^2 \sigma^2) \leq s_0 \lambda^2 \sigma^2$. Our simulation results on the Thresholded Lasso closely match our theoretical analysis.
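As a rough illustration of the multi-step procedure described above, the sketch below fits a Lasso, hard-thresholds the estimated coefficients to select a support, and refits ordinary least squares on the selected columns. The function name, the use of scikit-learn's Lasso, and the penalty and threshold levels of order $\sigma\sqrt{2\log p/n}$ are illustrative assumptions under the stated noise model, not the paper's exact tuning or implementation.

```python
import numpy as np
from sklearn.linear_model import Lasso


def thresholded_lasso(X, y, sigma, threshold=None):
    """Minimal sketch of a multi-step thresholding procedure based on the Lasso.

    Assumed steps (not the paper's exact constants):
      1. Run the Lasso with a penalty on the order of sigma * sqrt(2 log p / n).
      2. Hard-threshold the Lasso coefficients to select a sparse support.
      3. Refit ordinary least squares on the selected support.
    """
    n, p = X.shape
    lam = sigma * np.sqrt(2.0 * np.log(p) / n)  # noise-level penalty (assumed scaling)
    if threshold is None:
        threshold = lam  # thresholding level of the same order (assumption)

    # Step 1: initial Lasso estimator. scikit-learn minimizes
    # ||y - X b||_2^2 / (2n) + alpha * ||b||_1, so alpha = lam matches the
    # sqrt(2 log p / n) scaling up to constants.
    beta_init = Lasso(alpha=lam, fit_intercept=False).fit(X, y).coef_

    # Step 2: keep only coordinates whose magnitude exceeds the threshold.
    support = np.flatnonzero(np.abs(beta_init) > threshold)

    # Step 3: ordinary least squares refit restricted to the selected columns.
    beta_hat = np.zeros(p)
    if support.size > 0:
        coef, _, _, _ = np.linalg.lstsq(X[:, support], y, rcond=None)
        beta_hat[support] = coef
    return beta_hat, support
```

Thresholding followed by a least squares refit is what lets the final estimator mimic an oracle that knows which coordinates of $\beta$ exceed the noise level, which is the intuition behind the sparse oracle inequalities discussed in the abstract.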