arXiv:1107.2702
Learning Poisson Binomial Distributions

13 July 2011
Constantinos Daskalakis
Ilias Diakonikolas
Rocco A. Servedio
Abstract

We consider a basic problem in unsupervised learning: learning an unknown \emph{Poisson Binomial Distribution}. A Poisson Binomial Distribution (PBD) over $\{0,1,\dots,n\}$ is the distribution of a sum of $n$ independent Bernoulli random variables which may have arbitrary, potentially non-equal, expectations. These distributions were first studied by S. Poisson in 1837 (Poisson, 1837) and are a natural $n$-parameter generalization of the familiar Binomial Distribution. Surprisingly, prior to our work this basic learning problem was poorly understood, and known results for it were far from optimal. We essentially settle the complexity of the learning problem for this basic class of distributions. As our first main result we give a highly efficient algorithm which learns to $\epsilon$-accuracy (with respect to the total variation distance) using $\tilde{O}(1/\epsilon^3)$ samples, \emph{independent of $n$}. The running time of the algorithm is \emph{quasilinear} in the size of its input data, i.e., $\tilde{O}(\log(n)/\epsilon^3)$ bit-operations. (Observe that each draw from the distribution is a $\log(n)$-bit string.) Our second main result is a \emph{proper} learning algorithm that learns to $\epsilon$-accuracy using $\tilde{O}(1/\epsilon^2)$ samples, and runs in time $(1/\epsilon)^{\mathrm{poly}(\log(1/\epsilon))} \cdot \log n$. This is nearly optimal, since any algorithm for this problem must use $\Omega(1/\epsilon^2)$ samples. We also give positive and negative results for some extensions of this learning problem to weighted sums of independent Bernoulli random variables.
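The object being learned can be made concrete with a small sketch. The snippet below is not the paper's algorithm; it only illustrates the definition from the abstract: a PBD over $\{0,1,\dots,n\}$ is the law of a sum of $n$ independent Bernoulli variables with arbitrary (possibly unequal) expectations. The parameter vector `probs` here is an arbitrary illustrative choice.

```python
import random

def sample_pbd(p, rng=random):
    """Draw one sample from a Poisson Binomial Distribution.

    p: list of n Bernoulli expectations p_1, ..., p_n, each in [0, 1].
    The sample is X = X_1 + ... + X_n, where X_i ~ Bernoulli(p_i)
    are independent. When all p_i are equal, this is Binomial(n, p).
    """
    return sum(1 for pi in p if rng.random() < pi)

# Example: n = 4 coins with unequal biases -- a PBD that is not Binomial.
probs = [0.1, 0.5, 0.5, 0.9]
samples = [sample_pbd(probs) for _ in range(100_000)]

# The mean of a PBD is sum(p_i); here 0.1 + 0.5 + 0.5 + 0.9 = 2.0,
# so the empirical mean should be close to 2.0.
empirical_mean = sum(samples) / len(samples)
```

Learning a PBD means recovering, from samples like these, a hypothesis distribution within total variation distance $\epsilon$ of the true one; the paper shows $\tilde{O}(1/\epsilon^3)$ samples suffice regardless of $n$.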
