A Model of Double Descent for High-dimensional Binary Linear Classification

Information and Inference: A Journal of the IMA, 2019
Abstract

We consider a model for logistic regression in which only a subset of features of size $p$ is used to train a linear classifier over $n$ training samples. The classifier is obtained by running gradient descent (GD) on the logistic loss. For this model, we investigate how the generalization error depends on the overparameterization ratio $\kappa = p/n$. First, building on known deterministic results on the convergence properties of GD, we uncover a phase-transition phenomenon for the case of Gaussian regressors: the generalization error of GD matches that of the maximum-likelihood (ML) solution when $\kappa < \kappa_\star$, and that of the max-margin (SVM) solution when $\kappa > \kappa_\star$. Next, using the convex Gaussian min-max theorem (CGMT), we sharply characterize the performance of both the ML and SVM solutions. Combining these results, we obtain curves that explicitly characterize the generalization error of GD for varying values of $\kappa$. Numerical results validate the theoretical predictions and unveil double-descent phenomena that complement similar recent observations in linear regression settings.
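The training procedure described above can be sketched in a few lines. This is a minimal illustration, not the paper's experiment: it assumes Gaussian features, labels drawn from a logistic model on all $d$ features, and a classifier fit by plain gradient descent on the logistic loss using only the first $p$ features (the particular constants $d$, $n$, $p$, the step size, and the iteration count are arbitrary choices for the sketch).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (hypothetical constants): d total features, n samples,
# and a classifier trained on only the first p features. kappa = p / n
# is the overparameterization ratio from the abstract; here kappa = 1.5.
d, n, p = 200, 100, 150
beta = rng.standard_normal(d) / np.sqrt(d)  # assumed ground-truth weights

X = rng.standard_normal((n, d))
# Labels in {-1, +1} from a logistic model on the full feature vector.
y = np.where(rng.random(n) < 1.0 / (1.0 + np.exp(-X @ beta)), 1.0, -1.0)

Xp = X[:, :p]          # training uses only a subset of p features
w = np.zeros(p)
lr = 0.1
for _ in range(2000):  # gradient descent on the logistic loss
    # gradient of (1/n) * sum log(1 + exp(-y_i <x_i, w>))
    grad = -(Xp.T @ (y / (1.0 + np.exp(y * (Xp @ w))))) / n
    w -= lr * grad

# Generalization error estimated on fresh samples from the same model.
Xt = rng.standard_normal((10000, d))
yt = np.where(rng.random(10000) < 1.0 / (1.0 + np.exp(-Xt @ beta)), 1.0, -1.0)
err = np.mean(np.sign(Xt[:, :p] @ w) != yt)
print(err)
```

Sweeping $p$ (and hence $\kappa$) in such a simulation while recording `err` is one way to trace the generalization-error curves the abstract refers to.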
