Near-Optimal Bounds for Learning Gaussian Halfspaces with Random Classification Noise

13 July 2023
Ilias Diakonikolas
Jelena Diakonikolas
D. Kane
Puqian Wang
Nikos Zarifis
Abstract

We study the problem of learning general (i.e., not necessarily homogeneous) halfspaces with Random Classification Noise under the Gaussian distribution. We establish nearly-matching algorithmic and Statistical Query (SQ) lower bound results revealing a surprising information-computation gap for this basic problem. Specifically, the sample complexity of this learning problem is $\widetilde{\Theta}(d/\epsilon)$, where $d$ is the dimension and $\epsilon$ is the excess error. Our positive result is a computationally efficient learning algorithm with sample complexity $\widetilde{O}(d/\epsilon + d/(\max\{p, \epsilon\})^2)$, where $p$ quantifies the bias of the target halfspace. On the lower bound side, we show that any efficient SQ algorithm (or low-degree test) for the problem requires sample complexity at least $\Omega(d^{1/2}/(\max\{p, \epsilon\})^2)$. Our lower bound suggests that this quadratic dependence on $1/\epsilon$ is inherent for efficient algorithms.
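To make the setting concrete, below is a minimal sketch of the data model described in the abstract (not the paper's algorithm): samples $x \sim N(0, I_d)$, clean labels given by a general (non-homogeneous) halfspace $\mathrm{sign}(\langle w, x\rangle + t)$, and each label flipped independently with some probability $\eta$ (Random Classification Noise). The names `w`, `t`, `eta`, and the chosen values are illustrative assumptions, not taken from the paper.

```python
# Sketch of the RCN Gaussian halfspace data model (illustrative only).
import numpy as np

def sample_rcn_halfspace(n, d, t=0.5, eta=0.1, rng=None):
    """Draw n labeled examples from a biased Gaussian halfspace with RCN."""
    rng = rng if isinstance(rng, np.random.Generator) else np.random.default_rng(rng)
    w = rng.standard_normal(d)
    w /= np.linalg.norm(w)              # unit-norm weight vector
    x = rng.standard_normal((n, d))     # x ~ N(0, I_d)
    clean = np.sign(x @ w + t)          # general halfspace with threshold t
    clean[clean == 0] = 1
    flips = rng.random(n) < eta         # RCN: flip each label independently w.p. eta
    y = np.where(flips, -clean, clean)
    return x, y, w

if __name__ == "__main__":
    x, y, w = sample_rcn_halfspace(n=10_000, d=20, t=0.5, eta=0.1, rng=0)
    # Since <w, x> ~ N(0, 1), the clean positive region has Gaussian mass
    # Phi(t) ~= 0.69 for t = 0.5, so this halfspace is noticeably biased;
    # the abstract's parameter p quantifies this imbalance.
    print("empirical Pr[y = +1]:", (y == 1).mean())
```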
