arXiv:1305.3207

Efficient Density Estimation via Piecewise Polynomial Approximation

14 May 2013
Siu On Chan, Ilias Diakonikolas, Rocco A. Servedio, Xiaorui Sun
Abstract

We give a highly efficient "semi-agnostic" algorithm for learning univariate probability distributions that are well approximated by piecewise polynomial density functions. Let $p$ be an arbitrary distribution over an interval $I$ which is $\tau$-close (in total variation distance) to an unknown probability distribution $q$ that is defined by an unknown partition of $I$ into $t$ intervals and $t$ unknown degree-$d$ polynomials specifying $q$ over each of the intervals. We give an algorithm that draws $\tilde{O}(t(d+1)/\epsilon^2)$ samples from $p$, runs in time $\mathrm{poly}(t, d, 1/\epsilon)$, and with high probability outputs a piecewise polynomial hypothesis distribution $h$ that is $(O(\tau) + \epsilon)$-close (in total variation distance) to $p$. This sample complexity is essentially optimal; we show that even for $\tau = 0$, any algorithm that learns an unknown $t$-piecewise degree-$d$ probability distribution over $I$ to accuracy $\epsilon$ must use $\Omega\!\left(\frac{t(d+1)}{\mathrm{poly}(1 + \log(d+1))} \cdot \frac{1}{\epsilon^2}\right)$ samples from the distribution, regardless of its running time. Our algorithm combines tools from approximation theory, uniform convergence, linear programming, and dynamic programming. We apply this general algorithm to obtain a wide range of results for many natural problems in density estimation over both continuous and discrete domains. These include state-of-the-art results for learning mixtures of log-concave distributions; mixtures of $t$-modal distributions; mixtures of Monotone Hazard Rate distributions; mixtures of Poisson Binomial Distributions; mixtures of Gaussians; and mixtures of $k$-monotone densities. Our general technique yields computationally efficient algorithms for all these problems, in many cases with provably optimal sample complexities (up to logarithmic factors) in all parameters.
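To make the setup concrete, the minimal Python sketch below illustrates the flavor of piecewise polynomial density estimation, not the paper's semi-agnostic algorithm: the paper fits each piece in total variation distance via linear programming and searches for the partition via dynamic programming, whereas this sketch simply fixes an equal-width partition of $[0,1]$ into $t$ pieces and least-squares-fits a degree-$d$ polynomial to a histogram on each piece. The function name, the fixed partition, and the histogram surrogate are all assumptions made for illustration.

```python
import numpy as np

def fit_piecewise_poly_density(samples, t=4, d=3, hist_bins=200):
    """Toy piecewise polynomial density fit on [0, 1].

    Fixes an equal-width partition into t intervals (the paper's
    algorithm instead *searches* for a good partition via dynamic
    programming) and least-squares-fits a degree-d polynomial to a
    histogram estimate on each piece. Illustration only.
    """
    # Fine histogram as a crude surrogate for the empirical density.
    counts, edges = np.histogram(samples, bins=hist_bins,
                                 range=(0.0, 1.0), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])

    pieces = []  # list of (left, right, polynomial coefficients)
    cuts = np.linspace(0.0, 1.0, t + 1)
    for left, right in zip(cuts[:-1], cuts[1:]):
        mask = (centers >= left) & (centers < right)
        coefs = np.polyfit(centers[mask], counts[mask], deg=d)
        pieces.append((left, right, coefs))

    def h(x):
        """Evaluate the (clipped, unnormalized) hypothesis at points x."""
        x = np.asarray(x, dtype=float)
        out = np.zeros_like(x)
        for left, right, coefs in pieces:
            mask = (x >= left) & (x < right)
            out[mask] = np.polyval(coefs, x[mask])
        return np.clip(out, 0.0, None)  # densities are nonnegative

    # Renormalize numerically so the hypothesis integrates to ~1.
    grid = np.linspace(0.0, 1.0, 10_000, endpoint=False)
    mass = h(grid).mean()  # Riemann sum over [0, 1)
    return lambda x: h(x) / mass

# Example: recover a Beta(2, 5) density from 50,000 samples.
rng = np.random.default_rng(0)
density = fit_piecewise_poly_density(rng.beta(2, 5, size=50_000))
print(density(np.array([0.1, 0.3, 0.7])))
```

Even this crude version makes the parameter count visible: the hypothesis has $t$ pieces with $d+1$ coefficients each, which is why the $t(d+1)$ factor governs the sample complexity bounds above.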
