Smoothed Analysis for Learning Concepts with Low Intrinsic Dimension

In traditional models of supervised learning, the goal of a learner -- given examples from an arbitrary joint distribution on $\mathbb{R}^d \times \{\pm 1\}$ -- is to output a hypothesis that is competitive (to within $\epsilon$) with the best fitting concept from some class. In order to escape strong hardness results for learning even simple concept classes, we introduce a smoothed-analysis framework that requires a learner to compete only with the best classifier that is robust to small random Gaussian perturbation.

This subtle change allows us to give a wide array of learning results for any concept that (1) depends on a low-dimensional subspace (aka multi-index model) and (2) has a bounded Gaussian surface area. This class includes functions of halfspaces and (low-dimensional) convex sets, cases that are only known to be learnable in non-smoothed settings with respect to highly structured distributions such as Gaussians.

Surprisingly, our analysis also yields new results for traditional non-smoothed frameworks such as learning with margin. In particular, we obtain the first algorithm for agnostically learning intersections of $k$-halfspaces in time $k^{\mathrm{poly}(\log k / (\epsilon \gamma))}$, where $\gamma$ is the margin parameter. Before our work, the best-known runtime was exponential in $k$ (Arriaga and Vempala, 1999).
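For a concrete picture of the benchmark, the following display is a minimal sketch of what "competing with the best perturbation-robust classifier" could look like; the smoothing scale $\sigma$, the error notation, and the exact placement of the Gaussian perturbation are assumptions made here for illustration, not the paper's formal definitions.

\[
\Pr_{(x,y)\sim D}\bigl[h(x)\neq y\bigr]
\;\le\;
\min_{f\in\mathcal{C}}\;
\mathbb{E}_{z\sim\mathcal{N}(0,\sigma^{2} I_d)}
\Bigl[\Pr_{(x,y)\sim D}\bigl[f(x+z)\neq y\bigr]\Bigr]
\;+\;\epsilon .
\]

In words, the learner's hypothesis $h$ is only required to beat the smoothed (perturbation-averaged) error of the best concept $f$ in the class $\mathcal{C}$, rather than its worst-case agnostic error.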
@article{chandrasekaran2025_2407.00966,
  title   = {Smoothed Analysis for Learning Concepts with Low Intrinsic Dimension},
  author  = {Gautam Chandrasekaran and Adam Klivans and Vasilis Kontonis and Raghu Meka and Konstantinos Stavropoulos},
  journal = {arXiv preprint arXiv:2407.00966},
  year    = {2025}
}