
arXiv:1606.03077
Efficient Robust Proper Learning of Log-concave Distributions

9 June 2016
Ilias Diakonikolas, D. Kane, Alistair Stewart
Abstract

We study the {\em robust proper learning} of univariate log-concave distributions (over continuous and discrete domains). Given a set of samples drawn from an unknown target distribution, we want to compute a log-concave hypothesis distribution that is as close as possible to the target, in total variation distance. In this work, we give the first computationally efficient algorithm for this learning problem. Our algorithm achieves the information-theoretically optimal sample size (up to a constant factor), runs in polynomial time, and is robust to model misspecification with nearly-optimal error guarantees. Specifically, we give an algorithm that, on input $n = O(1/\epsilon^{5/2})$ samples from an unknown distribution $f$, runs in time $\widetilde{O}(n^{8/5})$, and outputs a log-concave hypothesis $h$ that (with high probability) satisfies $d_{TV}(h, f) = O(\mathrm{OPT}) + \epsilon$, where $\mathrm{OPT}$ is the minimum total variation distance between $f$ and the class of log-concave distributions. Our approach to the robust proper learning problem is quite flexible and may be applicable to many other univariate distribution families.
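To unpack the guarantee $d_{TV}(h, f) = O(\mathrm{OPT}) + \epsilon$, the two quantities involved have the following standard definitions; they are supplied here as a reading aid rather than quoted from the paper, and are stated for the continuous case (over a discrete domain the integral becomes a sum):

$$ d_{TV}(h, f) \;=\; \frac{1}{2} \int_{\mathbb{R}} \lvert h(x) - f(x) \rvert \, dx, \qquad \mathrm{OPT} \;=\; \inf_{g \in \mathcal{LC}} d_{TV}(f, g), $$

where $\mathcal{LC}$ denotes the class of univariate log-concave distributions, i.e., those with a density of the form $g(x) = e^{\phi(x)}$ for some concave function $\phi$. Read this way, the theorem says the output $h$ is itself log-concave (the "proper" requirement) and is competitive, up to a constant factor and an additive $\epsilon$, with the best log-concave approximation to the target $f$, even when $f$ is not log-concave (the "robust" requirement).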
