arXiv:2207.14266
Cryptographic Hardness of Learning Halfspaces with Massart Noise

28 July 2022
Ilias Diakonikolas
D. Kane
Pasin Manurangsi
Lisheng Ren
Abstract

We study the complexity of PAC learning halfspaces in the presence of Massart noise. In this problem, we are given i.i.d. labeled examples $(\mathbf{x}, y) \in \mathbb{R}^N \times \{ \pm 1 \}$, where the distribution of $\mathbf{x}$ is arbitrary and the label $y$ is a Massart corruption of $f(\mathbf{x})$, for an unknown halfspace $f: \mathbb{R}^N \to \{ \pm 1 \}$, with flipping probability $\eta(\mathbf{x}) \leq \eta < 1/2$. The goal of the learner is to compute a hypothesis with small 0-1 error. Our main result is the first computational hardness result for this learning problem. Specifically, assuming the (widely believed) subexponential-time hardness of the Learning with Errors (LWE) problem, we show that no polynomial-time Massart halfspace learner can achieve error better than $\Omega(\eta)$, even if the optimal 0-1 error is small, namely $\mathrm{OPT} = 2^{-\log^{c}(N)}$ for any universal constant $c \in (0, 1)$. Prior work had provided qualitatively similar evidence of hardness in the Statistical Query model. Our computational hardness result essentially resolves the polynomial PAC learnability of Massart halfspaces, by showing that known efficient learning algorithms for the problem are nearly best possible.
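For intuition, here is a minimal sketch (not from the paper) of how labeled examples under the Massart noise model could be generated. It assumes a standard Gaussian marginal over $\mathbf{x}$ and a uniformly random per-point flipping rate $\eta(\mathbf{x}) \leq \eta$; both choices are purely illustrative, since the model places no restriction on the marginal and allows $\eta(\mathbf{x})$ to be chosen adversarially point by point.

```python
import numpy as np

def sample_massart_halfspace(n_samples, dim, w, eta, seed=None):
    """Draw (x, y) pairs from a Massart-noisy halfspace.

    Clean label: f(x) = sign(<w, x>). Each label is flipped independently
    with a point-dependent probability eta_x <= eta < 1/2. Here eta_x is
    drawn uniformly from [0, eta] purely for illustration.
    """
    rng = np.random.default_rng(seed)
    # Illustrative marginal; the Massart model allows an arbitrary distribution over x.
    X = rng.standard_normal((n_samples, dim))
    clean = np.sign(X @ w)
    clean[clean == 0] = 1  # break ties toward +1
    # The adversary may pick any eta(x) <= eta; uniform is just a stand-in.
    eta_x = rng.uniform(0.0, eta, size=n_samples)
    flips = rng.random(n_samples) < eta_x
    y = np.where(flips, -clean, clean)
    return X, y

# Example: 1000 samples in R^20 with noise bound eta = 0.3.
w = np.ones(20) / np.sqrt(20)
X, y = sample_massart_halfspace(1000, 20, w, eta=0.3, seed=0)
```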
