The generalization error of max-margin linear classifiers: Benign overfitting and high dimensional asymptotics in the overparametrized regime

5 November 2019
Andrea Montanari
Feng Ruan
Youngtak Sohn
Jun Yan
Abstract

Modern machine learning classifiers often exhibit vanishing classification error on the training set. They achieve this by learning nonlinear representations of the inputs that map the data into linearly separable classes. Motivated by these phenomena, we revisit high-dimensional maximum margin classification for linearly separable data. We consider a stylized setting in which the data $(y_i, \boldsymbol{x}_i)$, $i \le n$, are i.i.d. with $\boldsymbol{x}_i \sim \mathsf{N}(\boldsymbol{0}, \boldsymbol{\Sigma})$ a $p$-dimensional Gaussian feature vector, and $y_i \in \{+1, -1\}$ a label whose distribution depends on a linear combination of the covariates $\langle \boldsymbol{\theta}_*, \boldsymbol{x}_i \rangle$. While the Gaussian model might appear extremely simplistic, universality arguments can be used to show that the results derived in this setting also apply to the output of certain nonlinear featurization maps. We consider the proportional asymptotics $n, p \to \infty$ with $p/n \to \psi$, and derive exact expressions for the limiting generalization error. We use this theory to derive two results of independent interest: $(i)$ sufficient conditions on $(\boldsymbol{\Sigma}, \boldsymbol{\theta}_*)$ for `benign overfitting' that parallel previously derived conditions in the case of linear regression; $(ii)$ an asymptotically exact expression for the generalization error when max-margin classification is used in conjunction with feature vectors produced by random one-layer neural networks.
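The stylized setting can be simulated directly. The sketch below is illustrative only: the dimensions `n, p`, the identity covariance, the noise level, and the choice of `theta_star` are all assumptions, not values from the paper, and gradient descent on the logistic loss is used as a stand-in for the max-margin classifier (on linearly separable data its direction converges to the max-margin separator, a standard implicit-bias result).

```python
import numpy as np

# Hypothetical parameters chosen for illustration; the paper studies the
# proportional limit n, p -> infinity with p/n -> psi.
rng = np.random.default_rng(0)
n, p = 50, 200                      # overparametrized regime: psi = p/n = 4
theta_star = np.zeros(p)
theta_star[0] = 1.0                 # assumed true direction; Sigma = I_p here

# Gaussian features x_i ~ N(0, I_p); noisy labels driven by <theta_*, x_i>
X = rng.standard_normal((n, p))
y = np.sign(X @ theta_star + 0.1 * rng.standard_normal(n))

# Proxy for the max-margin classifier: gradient descent on the logistic
# loss; on separable data its direction converges to the max-margin one.
theta = np.zeros(p)
lr = 0.5
for _ in range(2000):
    margins = y * (X @ theta)
    weights = y / (1.0 + np.exp(margins))      # per-sample -dloss/dmargin
    theta += lr * (X * weights[:, None]).mean(axis=0)

# With p > n the data is linearly separable almost surely, so the
# training error vanishes even though the labels are noisy.
train_err = np.mean(np.sign(X @ theta) != y)

# Monte Carlo estimate of the generalization error on fresh samples
X_test = rng.standard_normal((5000, p))
y_test = np.sign(X_test @ theta_star + 0.1 * rng.standard_normal(5000))
test_err = np.mean(np.sign(X_test @ theta) != y_test)
print(train_err, test_err)
```

The paper's asymptotic theory predicts the limiting value of `test_err` exactly as a function of $\psi$, $\boldsymbol{\Sigma}$, and $\boldsymbol{\theta}_*$; the simulation above only estimates it at one finite $(n, p)$.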
