
Subset Selection with Shrinkage: Sparse Linear Modeling When the SNR Is Low

Operations Research (OR), 2017
Abstract

We study a seemingly unexpected and relatively less understood overfitting aspect of a fundamental tool in sparse linear modeling: best-subset selection, which minimizes the residual sum of squares subject to a constraint on the number of nonzero coefficients. While best-subset selection is often perceived as the "gold standard" in sparse learning when the signal-to-noise ratio (SNR) is high, its predictive performance deteriorates when the SNR is low. In particular, it is outperformed by continuous shrinkage methods such as ridge regression and the Lasso. We investigate the behavior of best-subset selection in low-SNR regimes and propose an alternative approach based on a regularized version of the best-subset criterion. Our proposed estimators (a) mitigate, to a large extent, the poor predictive performance of best-subset selection in low-SNR regimes; and (b) perform favorably, while generally delivering substantially sparser models, relative to the best predictive models available via ridge regression and the Lasso. We conduct an extensive theoretical analysis of the predictive properties of the proposed approach and provide justification for its superior predictive performance relative to best-subset selection when the SNR is low. Our estimators can be expressed as solutions to mixed integer second-order conic optimization problems and, hence, are amenable to modern computational tools from mathematical optimization.
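To make the regularized best-subset criterion concrete, the following is a minimal sketch (not the mixed integer second-order conic formulation used in the paper) that combines a cardinality constraint with ridge-style shrinkage: minimize ||y - Xb||^2 + lam * ||b||^2 subject to ||b||_0 <= k, solved here by brute-force enumeration over supports, which is only feasible for small p. All function and variable names below are illustrative, not from the paper.

```python
import itertools
import numpy as np

def subset_ridge(X, y, k, lam):
    """Brute-force solver for the ridge-regularized best-subset criterion:
    min ||y - X b||^2 + lam * ||b||^2  s.t.  ||b||_0 <= k.
    Enumerates all supports of size <= k (exponential in p; small p only)."""
    n, p = X.shape
    best_obj, best_support, best_beta = np.inf, (), np.zeros(p)
    for size in range(k + 1):
        for S in itertools.combinations(range(p), size):
            XS = X[:, S]
            # Ridge solution restricted to support S (empty support => zero fit).
            if size:
                beta_S = np.linalg.solve(XS.T @ XS + lam * np.eye(size), XS.T @ y)
            else:
                beta_S = np.zeros(0)
            resid = y - XS @ beta_S
            obj = resid @ resid + lam * beta_S @ beta_S
            if obj < best_obj:
                beta = np.zeros(p)
                beta[list(S)] = beta_S
                best_obj, best_support, best_beta = obj, S, beta
    return best_obj, best_support, best_beta

# Synthetic low-SNR example: 3 true nonzero coefficients, heavy noise.
rng = np.random.default_rng(0)
n, p, k = 50, 8, 3
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:3] = 1.0
y = X @ beta_true + 3.0 * rng.standard_normal(n)  # large noise => low SNR
obj, support, beta_hat = subset_ridge(X, y, k, lam=1.0)
```

Setting `lam=0` recovers plain best-subset selection; increasing `lam` shrinks the fitted coefficients on the selected support, which is the mechanism the paper argues tempers overfitting in the low-SNR regime.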
