Reliable Learning of Halfspaces under Gaussian Marginals

18 November 2024
Ilias Diakonikolas
Lisheng Ren
Nikos Zarifis
arXiv:2411.11238
Abstract

We study the problem of PAC learning halfspaces in the reliable agnostic model of Kalai et al. (2012). The reliable PAC model captures learning scenarios where one type of error is costlier than the others. Our main positive result is a new algorithm for reliable learning of Gaussian halfspaces on $\mathbb{R}^d$ with sample and computational complexity $d^{O(\log(\min\{1/\alpha, 1/\epsilon\}))}\, \min\big(2^{\log(1/\epsilon)^{O(\log(1/\alpha))}},\, 2^{\mathrm{poly}(1/\epsilon)}\big)$, where $\epsilon$ is the excess error and $\alpha$ is the bias of the optimal halfspace. We complement our upper bound with a Statistical Query lower bound suggesting that the $d^{\Omega(\log(1/\alpha))}$ dependence is best possible. Conceptually, our results imply a strong computational separation between reliable agnostic learning and standard agnostic learning of halfspaces in the Gaussian setting.
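
For readers unfamiliar with the model, the following is a minimal LaTeX sketch of the (positively) reliable learning criterion, assuming the standard formulation of Kalai et al. (2012); the symbols $\mathrm{err}^{+}$, $\mathrm{err}^{-}$, and $\mathrm{opt}^{+}$ are illustrative notation, not taken from this abstract.

% Sketch of the (positively) reliable agnostic criterion, assuming the
% standard formulation of Kalai et al. (2012); notation is illustrative.
% The two one-sided errors of a hypothesis h on distribution D:
\[
  \mathrm{err}^{+}(h) = \Pr_{(x,y)\sim D}\bigl[h(x)=+1 \wedge y=-1\bigr], \qquad
  \mathrm{err}^{-}(h) = \Pr_{(x,y)\sim D}\bigl[h(x)=-1 \wedge y=+1\bigr].
\]
% A reliable learner must output h whose false-positive error is at most
% \epsilon, while its false-negative error competes with the best
% hypothesis in the class C that makes no false positives:
\[
  \mathrm{err}^{+}(h) \le \epsilon
  \quad\text{and}\quad
  \mathrm{err}^{-}(h) \le \mathrm{opt}^{+} + \epsilon,
  \quad\text{where}\quad
  \mathrm{opt}^{+} = \min_{f \in \mathcal{C}:\, \mathrm{err}^{+}(f)=0} \mathrm{err}^{-}(f).
\]

Here $\epsilon$ is the excess error from the abstract; in this framing, a false positive is the costlier error type that the model guards against entirely.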
