Convergence of Langevin Monte Carlo in Chi-Squared and Rényi Divergence

22 July 2020
Murat A. Erdogdu, Rasa Hosseinzadeh, Matthew Shunshi Zhang
arXiv:2007.11612
Abstract

We study sampling from a target distribution $\nu_* = e^{-f}$ using the unadjusted Langevin Monte Carlo (LMC) algorithm when the potential $f$ satisfies a strong dissipativity condition and is first-order smooth with a Lipschitz gradient. We prove that, initialized with a Gaussian random vector with sufficiently small variance, iterating the LMC algorithm for $\widetilde{\mathcal{O}}(\lambda^2 d \epsilon^{-1})$ steps is sufficient to reach an $\epsilon$-neighborhood of the target in both Chi-squared and Rényi divergence, where $\lambda$ is the logarithmic Sobolev constant of $\nu_*$. Our results do not require a warm start to deal with the exponential dimension dependence of the Chi-squared divergence at initialization. In particular, for strongly convex and first-order smooth potentials, we show that the LMC algorithm achieves the rate estimate $\widetilde{\mathcal{O}}(d \epsilon^{-1})$, which improves the previously known rates in both of these metrics under the same assumptions. Translating this rate to other metrics, our results also recover the state-of-the-art rate estimates in KL divergence, total variation, and $2$-Wasserstein distance in the same setup. Finally, as we rely on the logarithmic Sobolev inequality, our framework covers a range of non-convex potentials that are first-order smooth and exhibit strong convexity outside of a compact region.
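
The LMC iteration studied here is the Euler–Maruyama discretization of the Langevin diffusion: $x_{k+1} = x_k - \eta \nabla f(x_k) + \sqrt{2\eta}\, \xi_k$ with $\xi_k \sim \mathcal{N}(0, I_d)$. Below is a minimal Python sketch of that update, initialized from a small-variance Gaussian as in the paper's setup. The helper name `lmc_sample`, the step size, and the standard-Gaussian test target are illustrative assumptions, not the paper's tuned constants, which the theory fixes in terms of the dimension $d$ and the log-Sobolev constant $\lambda$.

```python
import numpy as np

def lmc_sample(grad_f, d, eta, n_steps, init_std=0.1, rng=None):
    """Unadjusted Langevin Monte Carlo targeting nu_* = exp(-f).

    One step is the Euler-Maruyama discretization of the Langevin
    diffusion: x_{k+1} = x_k - eta * grad_f(x_k) + sqrt(2 * eta) * xi_k,
    with xi_k ~ N(0, I_d). Illustrative sketch only; the paper's theory
    dictates how eta and n_steps must scale with d and the log-Sobolev
    constant of the target.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Gaussian initialization with small variance, as assumed in the abstract.
    x = init_std * rng.standard_normal(d)
    for _ in range(n_steps):
        x = x - eta * grad_f(x) + np.sqrt(2.0 * eta) * rng.standard_normal(d)
    return x

# Sanity check on a strongly convex, 1-smooth potential:
# f(x) = ||x||^2 / 2, so grad_f(x) = x and nu_* = N(0, I_d).
if __name__ == "__main__":
    d = 10
    draws = np.stack([lmc_sample(lambda x: x, d, eta=0.01, n_steps=2000)
                      for _ in range(500)])
    print("mean norm (should be near 0):", np.linalg.norm(draws.mean(axis=0)))
    print("avg coordinate variance (near 1):", draws.var(axis=0).mean())
```

For this Gaussian test target the stationary distribution of the discretized chain has per-coordinate variance $1/(1 - \eta/2)$, so a small step size keeps the discretization bias of order $\eta$, consistent with the $\epsilon^{-1}$ scaling in the iteration counts above.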
