Convergence of Langevin Monte Carlo in Chi-Square Divergence
We study sampling from a target distribution $\nu_* = e^{-f}$ using the unadjusted Langevin Monte Carlo (LMC) algorithm, when the potential $f$ satisfies a strong dissipativity condition and is first-order smooth with a Lipschitz gradient. We prove that, initialized with a Gaussian that has sufficiently small variance, $\widetilde{\mathcal{O}}(\lambda^2 d \epsilon^{-1})$ steps of the LMC algorithm are sufficient to reach an $\epsilon$-neighborhood of the target in Chi-square divergence, where $\lambda$ is the log-Sobolev constant of $\nu_*$. Our results do not require a warm start to deal with the exponential dimension dependency of the Chi-square divergence at initialization. In particular, for strongly convex and first-order smooth potentials, we show that the LMC algorithm achieves the rate estimate $\widetilde{\mathcal{O}}(d \epsilon^{-1})$, which improves the previously known rates in this metric under the same assumptions. Translated to other metrics, our result also recovers the best-known rate estimates in KL divergence, total variation, and $2$-Wasserstein distance in the same setup. Finally, as we rely on the log-Sobolev inequality, our framework covers a wide range of non-convex potentials that are first-order smooth and exhibit strong convexity outside of a compact region.
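As a rough illustration of the iteration the abstract analyzes, here is a minimal sketch of unadjusted LMC with the update $x_{k+1} = x_k - \eta \nabla f(x_k) + \sqrt{2\eta}\,\xi_k$, $\xi_k \sim \mathcal{N}(0, I_d)$, started from a small-variance Gaussian as in the initialization discussed above. The quadratic potential $f(x) = \|x\|^2/2$ (a strongly convex example), the step size, iteration count, and initial variance are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def lmc(grad_f, d, eta=1e-2, n_steps=10_000, init_var=1e-2, rng=None):
    """Unadjusted Langevin Monte Carlo sketch.

    Iterates x_{k+1} = x_k - eta * grad_f(x_k) + sqrt(2 * eta) * N(0, I_d),
    initialized from a Gaussian with small variance (per the abstract).
    All hyperparameter defaults are illustrative, not from the paper.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Small-variance Gaussian initialization.
    x = np.sqrt(init_var) * rng.standard_normal(d)
    for _ in range(n_steps):
        # Gradient step on the potential plus injected Gaussian noise.
        x = x - eta * grad_f(x) + np.sqrt(2.0 * eta) * rng.standard_normal(d)
    return x

# Example: strongly convex potential f(x) = ||x||^2 / 2, so grad_f(x) = x
# and the target nu* = e^{-f} is the standard Gaussian N(0, I_d).
sample = lmc(grad_f=lambda x: x, d=10)
```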