Understanding Stochastic Natural Gradient Variational Inference

4 June 2024
Kaiwen Wu
Jacob R. Gardner
Abstract

Stochastic natural gradient variational inference (NGVI) is a popular posterior inference method with applications in various probabilistic models. Despite its wide usage, little is known about its non-asymptotic convergence rate in the stochastic setting. We aim to narrow this gap and provide a better understanding. For conjugate likelihoods, we prove the first $\mathcal{O}(1/T)$ non-asymptotic convergence rate of stochastic NGVI. The complexity is no worse than that of stochastic gradient descent (a.k.a. black-box variational inference), and the rate likely has a better constant dependence, leading to faster convergence in practice. For non-conjugate likelihoods, we show that stochastic NGVI with the canonical parameterization implicitly optimizes a non-convex objective. Thus, a global convergence rate of $\mathcal{O}(1/T)$ is unlikely without significant new insights into optimizing the ELBO with natural gradients.
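To make the conjugate setting concrete, below is a minimal sketch (not taken from the paper) of stochastic NGVI on a toy conjugate model: inferring a Gaussian mean with known noise variance. The model, the 1/t step-size schedule, and all variable names are illustrative assumptions; in the conjugate case the natural-gradient step reduces to a convex combination of the current natural parameters and an unbiased minibatch estimate of the optimal ones.

```python
# Minimal sketch of stochastic NGVI on a conjugate toy model (illustrative only).
# Model: y_i ~ N(theta, sigma2), prior theta ~ N(mu0, tau2).
# Variational posterior q(theta) = N(m, s2) in natural parameters:
#   lam1 = m / s2,  lam2 = -1 / (2 * s2).
import numpy as np

rng = np.random.default_rng(0)

N, sigma2, mu0, tau2 = 1000, 1.0, 0.0, 10.0
y = rng.normal(2.0, np.sqrt(sigma2), size=N)

lam1, lam2 = 0.0, -0.5                       # initialize q at N(0, 1)
lam1_prior, lam2_prior = mu0 / tau2, -0.5 / tau2

batch_size, T = 10, 2000
for t in range(1, T + 1):
    idx = rng.choice(N, size=batch_size, replace=False)
    # Unbiased estimate of the optimal natural parameters:
    # prior contribution + rescaled minibatch sufficient statistics.
    lam1_hat = lam1_prior + (N / batch_size) * y[idx].sum() / sigma2
    lam2_hat = lam2_prior - 0.5 * N / sigma2
    # Stochastic natural gradient step = convex combination in natural space.
    rho_t = 1.0 / t                          # assumed 1/t step-size schedule
    lam1 = (1 - rho_t) * lam1 + rho_t * lam1_hat
    lam2 = (1 - rho_t) * lam2 + rho_t * lam2_hat

# Recover mean/variance of q and compare with the exact posterior.
s2 = -0.5 / lam2
m = lam1 * s2
post_prec = 1 / tau2 + N / sigma2
print("q(theta):", m, s2)
print("exact posterior:", (mu0 / tau2 + y.sum() / sigma2) / post_prec, 1 / post_prec)
```

Because the likelihood is conjugate, the iterates stay in the Gaussian exponential family and the update is an averaging scheme in natural-parameter space, which is the structure the paper exploits for its $\mathcal{O}(1/T)$ rate; the non-conjugate case discussed in the abstract does not admit this closed-form step.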
