Maximum Likelihood Estimation for Learning Populations of Parameters

12 February 2019
Ramya Korlakai Vinayak, Weihao Kong, Gregory Valiant, Sham Kakade
Abstract

Consider a setting with $N$ independent individuals, each with an unknown parameter $p_i \in [0, 1]$ drawn from some unknown distribution $P^\star$. After observing the outcomes of $t$ independent Bernoulli trials per individual, i.e., $X_i \sim \text{Binomial}(t, p_i)$, our objective is to accurately estimate $P^\star$. This problem arises in numerous domains, including the social sciences, psychology, healthcare, and biology, where the size of the population under study is usually large while the number of observations per individual is often limited. Our main result shows that, in the regime where $t \ll N$, the maximum likelihood estimator (MLE) is both statistically minimax optimal and efficiently computable. Precisely, for sufficiently large $N$, the MLE achieves the information-theoretically optimal error bound of $\mathcal{O}(\frac{1}{t})$ for $t < c \log N$, with respect to the earth mover's distance between the estimated and true distributions. More generally, in an exponentially large interval of $t$ beyond $c \log N$, the MLE achieves the minimax error bound of $\mathcal{O}(\frac{1}{\sqrt{t \log N}})$. In contrast, regardless of how large $N$ is, the naive "plug-in" estimator for this problem only achieves the sub-optimal error of $\Theta(\frac{1}{\sqrt{t}})$.
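The gap between the plug-in estimator and the MLE is easy to probe empirically. Below is a minimal sketch, not the authors' implementation: it simulates the model with an assumed $P^\star = \text{Beta}(2, 5)$, forms the naive plug-in estimate (the empirical distribution of $X_i / t$), fits a grid-discretized nonparametric MLE over the mixing weights via a standard EM fixed point, and compares both to the truth in earth mover's distance. The choice of $P^\star$, the grid resolution, and the iteration count are all illustrative assumptions.

```python
# Minimal sketch of the setting (illustrative, not the authors' code):
# simulate X_i ~ Binomial(t, p_i) with p_i drawn from an assumed
# P* = Beta(2, 5), then compare the naive plug-in estimate against a
# grid-discretized nonparametric MLE fitted by an EM fixed point.
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(0)
N, t = 10_000, 10                       # large population, few trials each
p_true = rng.beta(2.0, 5.0, size=N)     # unknown parameters p_i ~ P* (assumed Beta)
X = rng.binomial(t, p_true)             # one Binomial(t, p_i) count per individual

# Naive plug-in estimator: the empirical distribution of X_i / t.
plug_support = np.arange(t + 1) / t
plug_weights = np.bincount(X, minlength=t + 1) / N

# Nonparametric MLE over a fine grid on [0, 1]: the binomial likelihood
# matrix is fixed, so only the mixing weights w are learned, via the
# standard EM update for mixture proportions.
grid = np.linspace(0.0, 1.0, 201)
L = binom.pmf(X[:, None], t, grid[None, :])   # N x |grid| likelihoods
w = np.full(grid.size, 1.0 / grid.size)
for _ in range(300):
    post = L * w
    post /= post.sum(axis=1, keepdims=True)   # E-step: responsibilities
    w = post.mean(axis=0)                     # M-step: reweight the grid

# On [0, 1], earth mover's distance equals the integral of |CDF difference|.
xs = np.linspace(0.0, 1.0, 2001)
dx = xs[1] - xs[0]
F_true = np.searchsorted(np.sort(p_true), xs, side="right") / N
F_plug = np.cumsum(plug_weights)[np.searchsorted(plug_support, xs, side="right") - 1]
F_mle = np.cumsum(w)[np.searchsorted(grid, xs, side="right") - 1]
print("plug-in EMD:", np.abs(F_plug - F_true).sum() * dx)
print("MLE EMD:    ", np.abs(F_mle - F_true).sum() * dx)
```

With few trials per individual ($t = 10$) and a large population, one should expect the MLE's earth mover's distance to come out noticeably smaller than the plug-in's, consistent with the $\Theta(1/\sqrt{t})$ versus $\mathcal{O}(1/\sqrt{t \log N})$ contrast the abstract describes.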
