Nearly Tight Bounds for Robust Proper Learning of Halfspaces with a Margin

Ilias Diakonikolas, D. Kane, Pasin Manurangsi

29 August 2019 · arXiv:1908.11335
Abstract

We study the problem of \emph{properly} learning large margin halfspaces in the agnostic PAC model. In more detail, we study the complexity of properly learning $d$-dimensional halfspaces on the unit ball within misclassification error $\alpha \cdot \mathrm{OPT}_{\gamma} + \epsilon$, where $\mathrm{OPT}_{\gamma}$ is the optimal $\gamma$-margin error rate and $\alpha \geq 1$ is the approximation ratio. We give learning algorithms and computational hardness results for this problem, for all values of the approximation ratio $\alpha \geq 1$, that are nearly matching for a range of parameters. Specifically, for the natural setting in which $\alpha$ is any constant greater than one, we provide an essentially tight complexity characterization. On the positive side, we give an $\alpha = 1.01$-approximate proper learner that uses $O(1/(\epsilon^2\gamma^2))$ samples (which is optimal) and runs in time $\mathrm{poly}(d/\epsilon) \cdot 2^{\tilde{O}(1/\gamma^2)}$. On the negative side, we show that \emph{any} constant factor approximate proper learner has runtime $\mathrm{poly}(d/\epsilon) \cdot 2^{(1/\gamma)^{2-o(1)}}$, assuming the Exponential Time Hypothesis.
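To make the central quantity concrete, here is a minimal sketch (not from the paper; the function name, synthetic data, and parameter values are illustrative) of the empirical $\gamma$-margin error of a candidate halfspace: the fraction of points that are not classified correctly with margin at least $\gamma$. $\mathrm{OPT}_{\gamma}$ is the minimum of this quantity over all unit-norm halfspaces, and a proper learner in the sense above must output a halfspace whose standard (zero-margin) error is at most $\alpha \cdot \mathrm{OPT}_{\gamma} + \epsilon$.

```python
import numpy as np

def gamma_margin_error(w, X, y, gamma):
    """Empirical gamma-margin error of the halfspace x -> sign(<w, x>).

    A point (x, y) counts as an error unless it is classified correctly
    *with margin*: y * <w, x> >= gamma. Here w is a unit vector, each row
    of X lies in the unit ball, and the labels y are in {-1, +1}.
    """
    margins = y * (X @ w)
    return float(np.mean(margins < gamma))

# Illustrative usage on made-up data (a realizable instance, so the
# target halfspace itself has small gamma-margin error).
rng = np.random.default_rng(0)
d, n = 5, 1000
w_target = np.zeros(d)
w_target[0] = 1.0                               # target unit vector
X = rng.normal(size=(n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)   # points on the unit sphere
y = np.sign(X @ w_target)

# OPT_gamma = min over unit vectors w of gamma_margin_error(w, X, y, gamma);
# an alpha-approximate proper learner must output a halfspace with
# zero-margin error at most alpha * OPT_gamma + epsilon.
print(gamma_margin_error(w_target, X, y, gamma=0.1))
```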
