ResearchTrend.AI

arXiv: 1906.07868

Stochastic Runge-Kutta Accelerates Langevin Monte Carlo and Beyond

19 June 2019
Xuechen Li
Denny Wu
Lester W. Mackey
Murat A. Erdogdu
Abstract

Sampling with Markov chain Monte Carlo methods often amounts to discretizing some continuous-time dynamics with numerical integration. In this paper, we establish the convergence rate of sampling algorithms obtained by discretizing smooth Itô diffusions exhibiting fast Wasserstein-2 contraction, based on local deviation properties of the integration scheme. In particular, we study a sampling algorithm constructed by discretizing the overdamped Langevin diffusion with the method of stochastic Runge-Kutta. For strongly convex potentials that are smooth up to a certain order, its iterates converge to the target distribution in 2-Wasserstein distance in $\tilde{\mathcal{O}}(d\epsilon^{-2/3})$ iterations. This improves upon the best-known rate for strongly log-concave sampling based on the overdamped Langevin equation using only the gradient oracle without adjustment. In addition, we extend our analysis of stochastic Runge-Kutta methods to uniformly dissipative diffusions with possibly non-convex potentials and show that they achieve better rates than the Euler-Maruyama scheme in terms of the dependence on the tolerance $\epsilon$. Numerical studies show that these algorithms lead to better stability and lower asymptotic errors.
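To make the setting concrete, the sketch below contrasts the standard Euler-Maruyama discretization of the overdamped Langevin diffusion with a Heun-type stochastic Runge-Kutta step. This is an illustrative second-order-drift scheme under the assumption of a simple Gaussian target; it is not the exact integrator analyzed in the paper, whose construction and constants differ.

```python
import math
import random

def grad_U(x):
    # Gradient of the potential U(x) = x**2 / 2, i.e. a standard
    # Gaussian target exp(-U(x)) (an assumption for illustration).
    return x

def euler_maruyama_step(x, h, rng):
    # One Euler-Maruyama step of the overdamped Langevin diffusion
    #   dX_t = -grad U(X_t) dt + sqrt(2) dB_t
    xi = rng.gauss(0.0, 1.0)
    return x - h * grad_U(x) + math.sqrt(2.0 * h) * xi

def stochastic_heun_step(x, h, rng):
    # A Heun-type stochastic Runge-Kutta step (illustrative, NOT the
    # paper's exact scheme): Euler-Maruyama predictor, then a trapezoidal
    # correction of the drift, reusing the same Brownian increment.
    xi = rng.gauss(0.0, 1.0)
    noise = math.sqrt(2.0 * h) * xi
    x_pred = x - h * grad_U(x) + noise
    return x - 0.5 * h * (grad_U(x) + grad_U(x_pred)) + noise

def sample(step, n_iters=50_000, h=0.1, seed=0):
    # Run one chain and collect iterates (no burn-in, for simplicity).
    rng = random.Random(seed)
    x, xs = 0.0, []
    for _ in range(n_iters):
        x = step(x, h, rng)
        xs.append(x)
    return xs
```

For this Gaussian target both chains are ergodic with mean near 0 and variance near 1; the point of the higher-order scheme is that, at a fixed step size, its discretization bias in the stationary distribution is smaller than Euler-Maruyama's, which is the mechanism behind the improved $\epsilon$-dependence discussed above.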
