Covariance Assisted Screening and Estimation
Consider a linear regression model where the regression coefficient vector is unknown and it is of interest to separate its nonzero coordinates from the zero ones (i.e., variable selection). Motivated by examples in long-memory time series and change-point problems, we are primarily interested in the case where the Gram matrix is non-sparse but can be sparsified by a finite-order linear filter. We focus on the regime where signals are both rare and weak, so that successful variable selection is very challenging but still possible. We approach the problem with a new procedure called Covariance Assisted Screening and Estimation (CASE). CASE first uses linear filtering to reduce the original setting to a new regression model in which the corresponding Gram (covariance) matrix is sparse. The new covariance matrix induces a sparse graph, which guides us to screen variables without visiting all the submodels. Interacting with the signal sparsity, the graph enables us to decompose the original problem into many separable small-size subproblems (if only we knew where they were). Linear filtering also induces a so-called information leakage problem, which can be overcome by a newly introduced patching technique. Together, these give rise to CASE, a two-stage Screen and Clean procedure. We measure the performance of any variable selection procedure by the minimax Hamming distance between the sign vectors of the true and estimated coefficient vectors. We show that in a broad class of situations where the Gram matrix is non-sparse but sparsifiable, CASE achieves the optimal rate of convergence. The results are successfully applied to a long-memory time series model and a change-point model.
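The sparsification step the abstract describes can be illustrated on a toy case. The following sketch (my own illustration, not the paper's code) uses an AR(1)-style Gram matrix with entries rho^|i-j|, which is dense; applying the first-order linear filter (-rho, 1) row by row produces a filtered Gram matrix D Sigma D^T that is exactly diagonal, showing how a finite-order filter can turn a non-sparse covariance into a sparse one. The choices of dimension p and correlation rho are arbitrary.

```python
import numpy as np

# Toy illustration (assumed setup, not from the paper): a dense AR(1)-type
# Gram matrix Sigma_ij = rho^|i-j| becomes sparse after a first-order
# linear filter.
p, rho = 8, 0.7
idx = np.arange(p)
Sigma = rho ** np.abs(idx[:, None] - idx[None, :])  # dense: every entry nonzero

# The finite-order filter (-rho, 1) as a (p-1) x p banded matrix D,
# so row s of D @ X computes X[s+1] - rho * X[s].
D = np.zeros((p - 1, p))
D[np.arange(p - 1), np.arange(1, p)] = 1.0
D[np.arange(p - 1), np.arange(p - 1)] = -rho

A = D @ Sigma @ D.T  # filtered Gram matrix
off_diag = np.max(np.abs(A - np.diag(np.diag(A))))
print(off_diag)  # ~0: the filtered Gram matrix is diagonal, hence sparse
```

Here the cancellation is exact: for this Sigma, the filtered covariance works out to (1 - rho^2) times the identity, so every off-diagonal entry vanishes up to floating-point error. In the paper's setting the filter need only make the matrix sparse (e.g., banded), not perfectly diagonal.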