Tight bounds for maximum $\ell_1$-margin classifiers

7 December 2022
Stefan Stojanovic
Konstantin Donhauser
Fanny Yang
arXiv:2212.03783 · PDF · HTML
Abstract

Popular iterative algorithms such as boosting methods and coordinate descent on linear models converge to the maximum $\ell_1$-margin classifier, a.k.a. sparse hard-margin SVM, in high-dimensional regimes where the data is linearly separable. Previous works consistently show that many estimators relying on the $\ell_1$-norm achieve improved statistical rates for hard sparse ground truths. We show that, surprisingly, this adaptivity does not apply to the maximum $\ell_1$-margin classifier for a standard discriminative setting. In particular, for the noiseless setting, we prove tight upper and lower bounds for the prediction error that match existing rates of order $\frac{\|w^*\|_1^{2/3}}{n^{1/3}}$ for general ground truths. To complete the picture, we show that when interpolating noisy observations, the error vanishes at a rate of order $\frac{1}{\sqrt{\log(d/n)}}$. We are therefore the first to show benign overfitting for the maximum $\ell_1$-margin classifier.
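
The abstract does not spell out the estimator itself. As a point of reference, here is a minimal sketch assuming the standard formulation of the maximum $\ell_1$-margin classifier, $\hat{w} \in \arg\max_{\|w\|_1 \le 1} \min_i y_i \langle x_i, w \rangle$, solved as a linear program. The function name, the use of scipy.optimize.linprog, and the synthetic-data setup below are illustrative assumptions, not taken from the paper.

    import numpy as np
    from scipy.optimize import linprog

    def max_l1_margin_classifier(X, y):
        # Maximum l1-margin classifier (sparse hard-margin SVM), assuming separable data:
        #   maximize t  subject to  y_i <x_i, w> >= t  and  ||w||_1 <= 1,
        # written as an LP over (w_plus, w_minus, t) with w = w_plus - w_minus
        # and w_plus, w_minus >= 0.
        n, d = X.shape
        c = np.zeros(2 * d + 1)
        c[-1] = -1.0                                     # maximize t  <=>  minimize -t
        signed_X = y[:, None] * X
        # Margin constraints: -y_i x_i^T (w_plus - w_minus) + t <= 0
        A_margin = np.hstack([-signed_X, signed_X, np.ones((n, 1))])
        # l1-ball constraint: sum(w_plus) + sum(w_minus) <= 1
        A_l1 = np.concatenate([np.ones(2 * d), [0.0]])[None, :]
        A_ub = np.vstack([A_margin, A_l1])
        b_ub = np.concatenate([np.zeros(n), [1.0]])
        bounds = [(0, None)] * (2 * d) + [(None, None)]  # w_plus, w_minus >= 0; t free
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
        w = res.x[:d] - res.x[d:2 * d]
        return w, res.x[-1]                              # classifier and achieved l1-margin

A quick check in the regime the abstract discusses (n much smaller than d, sparse ground truth): draw X with i.i.d. Gaussian entries, set y = sign(X @ w_star) for a 5-sparse w_star, and the returned margin is strictly positive whenever the data is linearly separable.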
