On Computationally Efficient Learning of Exponential Family Distributions

12 September 2023
Abhin Shah, Devavrat Shah, Gregory W. Wornell
arXiv:2309.06413
Abstract

We consider the classical problem of learning, with arbitrary accuracy, the natural parameters of a $k$-parameter truncated \textit{minimal} exponential family from i.i.d. samples in a computationally and statistically efficient manner. We focus on the setting where the support as well as the natural parameters are appropriately bounded. While the traditional maximum likelihood estimator for this class of exponential family is consistent, asymptotically normal, and asymptotically efficient, evaluating it is computationally hard. In this work, we propose a novel loss function and a computationally efficient estimator that is consistent as well as asymptotically normal under mild conditions. We show that, at the population level, our method can be viewed as maximum likelihood estimation of a re-parameterized distribution belonging to the same class of exponential family. Further, we show that our estimator can be interpreted as a solution to minimizing a particular Bregman score as well as an instance of minimizing the \textit{surrogate} likelihood. We also provide finite-sample guarantees for achieving an error (in $\ell_2$-norm) of $\alpha$ in the parameter estimation with sample complexity $O({\sf poly}(k)/\alpha^2)$. When tailored to node-wise-sparse Markov random fields, our method achieves the order-optimal sample complexity of $O(\log(k)/\alpha^2)$. Finally, we demonstrate the performance of our estimator via numerical experiments.
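To make the setup concrete: the abstract's point that the classical MLE is statistically sound yet computationally hard traces to the log-partition function, which requires integrating the unnormalized density over the support. The sketch below is illustrative only and is not the paper's proposed estimator: it fits a two-parameter truncated exponential family $p_\theta(x) \propto \exp(\theta_1 x + \theta_2 x^2)$ on $[0,1]$ by direct MLE. The sufficient statistics, ground-truth parameters, and rejection sampler are all assumptions made for the example, and the numerical evaluation of $\log Z(\theta)$ is feasible here only because the support is one-dimensional.

```python
# Illustrative sketch (not the paper's estimator): direct MLE for a k = 2
# truncated exponential family p_theta(x) ∝ exp(theta1*x + theta2*x^2) on the
# bounded support [0, 1]. The log-partition function log Z(theta) is computed
# by numerical integration -- the step that becomes intractable as the
# dimension grows, which is the bottleneck the paper's computationally
# efficient estimator is designed to avoid.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize

def phi(x):
    """Sufficient statistics of the (assumed) family."""
    return np.array([x, x**2])

def log_partition(theta):
    """log Z(theta) = log ∫_0^1 exp(<theta, phi(x)>) dx via 1-D quadrature."""
    z, _ = quad(lambda x: np.exp(theta @ phi(x)), 0.0, 1.0)
    return np.log(z)

def neg_log_likelihood(theta, samples):
    """Average negative log-likelihood: log Z(theta) - mean_i <theta, phi(x_i)>."""
    mean_stats = np.mean([phi(x) for x in samples], axis=0)
    return log_partition(theta) - theta @ mean_stats

# Draw i.i.d. samples from an assumed ground-truth parameter by rejection
# sampling; on [0, 1], exp(<theta, phi(x)>) <= exp(max(theta1,0) + max(theta2,0)).
rng = np.random.default_rng(0)
theta_true = np.array([2.0, -3.0])

def sample(n):
    bound = np.exp(max(theta_true[0], 0.0) + max(theta_true[1], 0.0))
    out = []
    while len(out) < n:
        x = rng.uniform()
        if rng.uniform() < np.exp(theta_true @ phi(x)) / bound:
            out.append(x)
    return out

samples = sample(2000)
result = minimize(neg_log_likelihood, x0=np.zeros(2), args=(samples,))
print("estimated natural parameters:", result.x)  # close to theta_true
```

Because both $\log Z(\theta)$ and the average sufficient statistics are convex/linear in $\theta$, the negative log-likelihood is convex and the optimizer recovers a parameter near `theta_true`; in higher dimensions, however, the integral defining $Z(\theta)$ has no tractable form, which is the regime the paper's novel loss function targets.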
