
Surprises in High-Dimensional Ridgeless Least Squares Interpolation

Annals of Statistics (Ann. Stat.), 2019
Abstract

Interpolators -- estimators that achieve zero training error -- have attracted growing attention in machine learning, mainly because state-of-the-art neural networks appear to be models of this type. In this paper, we study minimum $\ell_2$-norm interpolation in high-dimensional linear regression. Motivated by the connection with overparametrized neural networks, we consider the case of random features. We study two distinct models for the features' distribution: a linear model, in which the feature vectors $x_i \in \mathbb{R}^p$ are obtained by applying a linear transform to vectors of i.i.d. entries, $x_i = \Sigma^{1/2} z_i$ (with $z_i \in \mathbb{R}^p$); and a nonlinear model, in which the features are obtained by passing the input through a random one-layer neural network, $x_i = \varphi(W z_i)$ (with $z_i \in \mathbb{R}^d$, and $\varphi$ an activation function acting independently on the coordinates of $W z_i$). We recover -- in a precise quantitative way -- several phenomena that have been observed in large-scale neural networks and kernel machines, including the `double descent' behavior of the generalization error and the potential benefit of overparametrization.
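
To make the setup concrete, here is a minimal sketch (not the authors' code) of minimum $\ell_2$-norm interpolation under the two feature models described above, using the pseudoinverse solution $\hat\beta = X^{+} y$. The dimensions, covariance $\Sigma$, true signal, noise level, and the choice of ReLU as the activation $\varphi$ are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (not the paper's code): minimum l2-norm ("ridgeless")
# least-squares interpolation under the two feature models in the abstract.
# The dimensions n, p, d, the covariance Sigma, the signal beta_star, the
# noise level, and the ReLU activation are all illustrative assumptions.

rng = np.random.default_rng(0)
n, p, d = 200, 400, 50                        # overparametrized regime: p > n

# Linear feature model: x_i = Sigma^{1/2} z_i with z_i having i.i.d. entries.
Sigma = np.diag(np.linspace(0.5, 2.0, p))     # assumed covariance spectrum
Z = rng.standard_normal((n, p))
X_lin = Z @ np.sqrt(Sigma)                    # rows are x_i = Sigma^{1/2} z_i

# Nonlinear (random-features) model: x_i = phi(W z_i), phi applied entrywise.
W = rng.standard_normal((p, d)) / np.sqrt(d)  # random first-layer weights
Z_in = rng.standard_normal((n, d))            # inputs z_i in R^d
X_rf = np.maximum(Z_in @ W.T, 0.0)            # phi = ReLU (assumed choice)

# Responses from a linear signal plus noise (illustrative data-generating model).
beta_star = rng.standard_normal(p) / np.sqrt(p)
y = X_lin @ beta_star + 0.1 * rng.standard_normal(n)

def min_norm_interpolator(X, y):
    """Minimum l2-norm solution beta_hat = X^+ y; interpolates when rank(X) = n."""
    return np.linalg.pinv(X) @ y

for name, X in [("linear features", X_lin), ("random features", X_rf)]:
    beta_hat = min_norm_interpolator(X, y)
    print(f"{name}: training error = {np.linalg.norm(X @ beta_hat - y):.2e}")
```

With $p > n$ and features in general position, both designs interpolate $y$ exactly (training error at numerical precision); varying the ratio $p/n$ around 1 in such an experiment is what traces out the double-descent behavior of the generalization error studied in the paper.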
