A sharp uniform-in-time error estimate for Stochastic Gradient Langevin Dynamics

Abstract

We establish a sharp uniform-in-time error estimate for Stochastic Gradient Langevin Dynamics (SGLD), a widely used sampling algorithm. Under mild assumptions, we obtain a uniform-in-time $O(\eta^2)$ bound for the KL-divergence between the SGLD iteration and the Langevin diffusion, where $\eta$ is the step size (or learning rate). Our analysis remains valid for varying step sizes. Consequently, we derive an $O(\eta)$ bound for the distance between the invariant measures of the SGLD iteration and the Langevin diffusion, in terms of the Wasserstein and total variation distances. Our result is a significant improvement over existing analyses of SGLD in the related literature.
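For context, the two objects being compared are the discrete SGLD iteration and the continuous-time Langevin diffusion. A minimal sketch of the standard formulation follows; the precise notation and assumptions in the paper may differ. The SGLD update with (possibly varying) step size $\eta_k$ reads

$$x_{k+1} = x_k - \eta_k \,\widehat{\nabla f}(x_k) + \sqrt{2\eta_k}\,\xi_k, \qquad \xi_k \sim \mathcal{N}(0, I),$$

where $\widehat{\nabla f}$ denotes a stochastic (e.g. mini-batch) estimate of the gradient of the potential $f$, while the Langevin diffusion it discretizes is

$$dX_t = -\nabla f(X_t)\,dt + \sqrt{2}\,dB_t,$$

whose invariant measure is proportional to $e^{-f}$.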

@article{li2025_2207.09304,
  title={A sharp uniform-in-time error estimate for Stochastic Gradient Langevin Dynamics},
  author={Lei Li and Yuliang Wang},
  journal={arXiv preprint arXiv:2207.09304},
  year={2025}
}