Compressed Sparse Linear Regression

High-dimensional sparse linear regression is a basic problem in machine learning and statistics. Consider a linear model $y = X\theta^* + w$, where $y \in \mathbb{R}^n$ is the vector of observations, $X \in \mathbb{R}^{n \times d}$ is the covariate matrix, with the $i$th row of $X$ representing the covariates for the $i$th observation, and $w \in \mathbb{R}^n$ is an unknown noise vector. In many applications, the linear regression model is high-dimensional in nature, meaning that the number of observations $n$ may be substantially smaller than the number of covariates $d$. In these cases, it is common to assume that $\theta^*$ is sparse, and the goal in sparse linear regression is to estimate this sparse $\theta^*$, given $(X, y)$. In this paper, we study a variant of the traditional sparse linear regression problem where each of the covariate vectors in $\mathbb{R}^d$ is individually projected by a random linear transformation to $\mathbb{R}^m$ with $m \ll d$. Such transformations are commonly applied in practice for computational savings in resources such as storage space, transmission bandwidth, and processing time. Our main result shows that one can estimate $\theta^*$ with a low $\ell_2$-error, even with access to only these projected covariate vectors, under some mild assumptions on the problem instance. Our approach is based on solving a variant of the popular Lasso optimization problem. While the conditions (such as the restricted eigenvalue condition on $X$) under which a Lasso formulation succeeds in estimating $\theta^*$ are well understood, we investigate conditions under which this variant of Lasso estimates $\theta^*$. The main technical ingredient of our result, a bound on the restricted eigenvalue of certain projections of a deterministic matrix satisfying a stable rank condition, could be of interest beyond sparse regression.
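To make the setup concrete, below is a minimal simulation sketch. The specific choices here are illustrative assumptions, not the paper's exact construction: a Gaussian projection $\Phi$ scaled so that $\mathbb{E}[\Phi^\top\Phi] = I_d$, a Lasso run on the back-projected surrogate covariates $\Phi^\top \Phi x_i$ (one natural way to instantiate "a variant of Lasso" from compressed data alone), and arbitrary values of $n$, $d$, $m$, the sparsity $k$, the noise level, and the penalty.

```python
# Minimal simulation sketch of compressed sparse linear regression.
# Assumptions (illustrative, not from the paper): Gaussian projection Phi,
# back-projected surrogate covariates Phi^T Phi x_i fed to a standard Lasso,
# and arbitrary choices of n, d, m, k, noise level, and penalty alpha.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, d, m, k = 200, 1000, 400, 5  # observations, covariates, projected dim, sparsity

# Ground-truth k-sparse parameter theta_star in R^d.
theta_star = np.zeros(d)
support = rng.choice(d, size=k, replace=False)
theta_star[support] = rng.normal(size=k)

# Design matrix X (rows are the covariate vectors x_i) and noisy observations y.
X = rng.normal(size=(n, d))
y = X @ theta_star + 0.1 * rng.normal(size=n)

# Random projection Phi in R^{m x d}; the learner sees only Phi @ x_i and y.
Phi = rng.normal(size=(m, d)) / np.sqrt(m)  # scaling so E[Phi^T Phi] = I_d
projected = X @ Phi.T                       # the n compressed covariate vectors

# Lasso variant: run Lasso on the surrogate design X Phi^T Phi, which
# approximates X and is computable from the compressed covariates alone.
X_surrogate = projected @ Phi
lasso = Lasso(alpha=0.05)
lasso.fit(X_surrogate, y)

print(f"l2 estimation error: {np.linalg.norm(lasso.coef_ - theta_star):.3f}")
```

Whether such a Lasso variant succeeds hinges on the surrogate design satisfying a restricted eigenvalue condition, which is exactly what the paper's bound on the restricted eigenvalue of projections of a stable-rank-bounded matrix addresses.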