Lasso and Partially-Rotated Designs

We consider the sparse linear regression model $y = X\beta^* + w$, where $X \in \mathbb{R}^{n \times d}$ is the design, $\beta^* \in \mathbb{R}^d$ is a $k$-sparse secret, and $w \sim N(0, \sigma^2 I_n)$ is the noise. Given input $X$ and $y$, the goal is to estimate $\beta^*$. In this setting, the Lasso estimate achieves prediction error $O\!\left(\frac{k \sigma^2 \log d}{\gamma n}\right)$, where $\gamma$ is the restricted eigenvalue (RE) constant of $X$ with respect to $\beta^*$.

In this paper, we introduce a new family of designs -- which we call partially-rotated designs -- for which the RE constant with respect to the secret is bounded away from zero even when a subset of the design columns are arbitrarily correlated among themselves.

As an example of such a design, suppose we start with some arbitrary $X$, and then apply a random rotation to the columns of $X$ indexed by a set $S$. Let $\lambda$ be the smallest eigenvalue of $\frac{1}{n} X_S^\top X_S$, where $X_S$ is the restriction of $X$ to the columns indexed by $S$. In this setting, our results imply that Lasso achieves prediction error $O\!\left(\frac{k \sigma^2 \log d}{\lambda n}\right)$ with high probability. This prediction error bound is independent of the arbitrary columns of $X$ not indexed by $S$, and is as good as if all of these columns were perfectly well-conditioned.

Technically, our proof reduces to showing that matrices with a certain deterministic property -- which we call RNO -- lead to RE constants that are independent of a subset of the matrix columns. This property is similar to, but incomparable with, the restricted orthogonality condition of [CT05].
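As a concrete illustration of the setting (not code from the paper): the numpy sketch below builds a design whose columns outside $S$ are nearly collinear, applies a random rotation to the columns in $S$, computes $\lambda$ as the smallest eigenvalue of $\frac{1}{n} X_S^\top X_S$, and runs Lasso via a simple ISTA solver. All problem sizes, the solver, and the regularization choice $2\sigma\sqrt{\log d / n}$ are illustrative assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 200, 50, 5      # samples, ambient dimension, sparsity (illustrative)
sigma = 0.1               # noise level

# Arbitrary design: make the columns outside S nearly collinear (badly conditioned).
X = rng.standard_normal((n, d))
X[:, 10:] = X[:, [10]] + 0.01 * rng.standard_normal((n, d - 10))

# Apply a random rotation to the columns indexed by S.
S = np.arange(10)
Q, _ = np.linalg.qr(rng.standard_normal((len(S), len(S))))  # random orthogonal matrix
X[:, S] = X[:, S] @ Q

# lambda: smallest eigenvalue of (1/n) X_S^T X_S (unchanged by the rotation).
lam = np.linalg.eigvalsh(X[:, S].T @ X[:, S] / n).min()

# k-sparse secret supported on S, and noisy observations y = X beta* + w.
beta_star = np.zeros(d)
beta_star[S[:k]] = rng.choice([-1.0, 1.0], size=k)
y = X @ beta_star + sigma * rng.standard_normal(n)

def lasso_ista(X, y, reg, iters=2000):
    """Lasso via proximal gradient (ISTA) on (1/2n)||y - Xb||^2 + reg*||b||_1."""
    n = X.shape[0]
    step = 1.0 / np.linalg.norm(X, 2) ** 2  # 1/L for the (1/2)||y - Xb||^2 loss
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        b = b - step * (X.T @ (X @ b - y))                        # gradient step
        b = np.sign(b) * np.maximum(np.abs(b) - step * n * reg, 0)  # soft-threshold
    return b

reg = 2 * sigma * np.sqrt(np.log(d) / n)   # theoretical scaling, up to constants
beta_hat = lasso_ista(X, y, reg)
pred_err = np.linalg.norm(X @ (beta_hat - beta_star)) ** 2 / n
print(f"lambda = {lam:.3f}, prediction error = {pred_err:.4f}")
```

Despite the arbitrarily correlated columns outside $S$, the measured prediction error stays small, of the order $k\sigma^2\log d/(\lambda n)$, which is the qualitative behavior the abstract describes.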
@article{buhai2025_2505.11093,
  title={Lasso and Partially-Rotated Designs},
  author={Rares-Darius Buhai},
  journal={arXiv preprint arXiv:2505.11093},
  year={2025}
}