Convergence of Langevin MCMC in KL-divergence

25 May 2017
Xiang Cheng
Peter L. Bartlett
arXiv:1705.09048 (abs / PDF / HTML)
Abstract

Langevin diffusion is a commonly used tool for sampling from a given distribution. In this work, we establish that when the target density p∗ is such that log p∗ is L-smooth and m-strongly convex, discrete Langevin diffusion produces a distribution p with KL(p ∥ p∗) ≤ ϵ in Õ(d/ϵ) steps, where d is the dimension of the sample space. We also study the convergence rate when the strong-convexity assumption is absent. By considering the Langevin diffusion as a gradient flow in the space of probability distributions, we obtain an elegant analysis that applies to the stronger property of convergence in KL-divergence and gives a conceptually simpler proof of the best-known convergence results in weaker metrics.
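
The "discrete Langevin diffusion" the abstract refers to is usually realized as the unadjusted Langevin algorithm (ULA), which iterates x_{k+1} = x_k − η ∇U(x_k) + √(2η) ξ_k with U = −log p∗ and ξ_k ~ N(0, I). The sketch below is not taken from the paper; the function names, step size η, and Gaussian target are illustrative assumptions meant only to show the update rule.

```python
import numpy as np

def ula_sample(grad_U, x0, step_size, n_steps, rng=None):
    """Sketch of the unadjusted Langevin algorithm (ULA).

    grad_U    : gradient of the potential U(x) = -log p*(x)
    x0        : initial point (d-dimensional array)
    step_size : discretization step eta (illustrative choice, not from the paper)
    n_steps   : number of iterations
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.array(x0, dtype=float)
    samples = []
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        # Langevin update: gradient step on U plus Gaussian noise scaled by sqrt(2*eta)
        x = x - step_size * grad_U(x) + np.sqrt(2.0 * step_size) * noise
        samples.append(x.copy())
    return np.array(samples)

# Example target: a standard Gaussian, where U(x) = ||x||^2 / 2,
# so log p* is 1-smooth and 1-strongly convex (the setting of the main theorem).
if __name__ == "__main__":
    d = 5
    grad_U = lambda x: x  # gradient of ||x||^2 / 2
    samples = ula_sample(grad_U, x0=np.zeros(d), step_size=0.05, n_steps=5000)
    print(samples[-1000:].mean(axis=0), samples[-1000:].var(axis=0))
```

In this reading of the abstract's result, reaching KL(p ∥ p∗) ≤ ϵ requires on the order of d/ϵ such iterations (up to logarithmic factors), with the step size chosen appropriately as a function of L, m, d, and ϵ.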
