Stochastic variance-reduced Gaussian variational inference on the Bures-Wasserstein manifold

3 October 2024
Hoang Phuc Hau Luu
Hanlin Yu
Bernardo Williams
Marcelo Hartmann
Arto Klami
Abstract

Optimization in the Bures-Wasserstein space has been gaining popularity in the machine learning community since it draws connections between variational inference and Wasserstein gradient flows. The variational inference objective of Kullback-Leibler divergence minimization can be written as the sum of the negative entropy and the potential energy, making forward-backward Euler the method of choice. Notably, the backward step admits a closed-form solution in this case, facilitating the practicality of the scheme. However, the forward step is not exact since the Bures-Wasserstein gradient of the potential energy involves intractable expectations. Recent approaches propose using the Monte Carlo method -- in practice a single-sample estimator -- to approximate these terms, resulting in high variance and poor performance. We propose a novel variance-reduced estimator based on the principle of control variates. We theoretically show that this estimator has a smaller variance than the Monte Carlo estimator in scenarios of interest. We also prove that variance reduction helps improve the optimization bounds of the current analysis. We demonstrate that the proposed estimator achieves order-of-magnitude improvements over the previous Bures-Wasserstein methods.
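The abstract's key idea is the control-variates principle: subtract a correlated quantity with a known expectation from a Monte Carlo estimator to shrink its variance without biasing it. The sketch below is a generic, self-contained illustration of that principle on a toy problem (estimating E[X²] for Gaussian X, using g(X) = X with known mean as the control variate) — it is not the paper's Bures-Wasserstein estimator, and the function names are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_and_cv_estimates(f, g, g_mean, samples):
    """Return per-sample plain Monte Carlo values of f(X) alongside
    control-variate-corrected values using g(X) with known mean g_mean."""
    fx, gx = f(samples), g(samples)
    # Near-optimal coefficient c* = Cov(f(X), g(X)) / Var(g(X)),
    # estimated from the same samples.
    c = np.cov(fx, gx)[0, 1] / np.var(gx)
    mc = fx                       # plain Monte Carlo terms
    cv = fx - c * (gx - g_mean)   # control-variate-corrected terms
    return mc, cv

# Toy target: E[X^2] for X ~ N(1, 1), with control variate g(X) = X,
# whose mean (1.0) is known in closed form.
x = rng.normal(1.0, 1.0, size=100_000)
mc, cv = mc_and_cv_estimates(lambda s: s**2, lambda s: s, 1.0, x)

# Both estimators target E[X^2] = 2, but the corrected one has
# markedly smaller per-sample variance.
print(np.mean(mc), np.var(mc))
print(np.mean(cv), np.var(cv))
```

The same mechanism underlies the paper's estimator: because the forward (gradient) step is approximated from few samples, any reduction in per-sample variance translates directly into a more accurate stochastic gradient.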

@article{luu2025_2410.02490,
  title={Stochastic variance-reduced Gaussian variational inference on the Bures-Wasserstein manifold},
  author={Hoang Phuc Hau Luu and Hanlin Yu and Bernardo Williams and Marcelo Hartmann and Arto Klami},
  journal={arXiv preprint arXiv:2410.02490},
  year={2025}
}