ResearchTrend.AI

arXiv: 1710.00095
User-friendly guarantees for the Langevin Monte Carlo with inaccurate gradient

29 September 2017
A. Dalalyan
Avetik G. Karagulyan
Abstract

In this paper, we revisit the recently established theoretical guarantees for the convergence of the Langevin Monte Carlo algorithm of sampling from a smooth and (strongly) log-concave density. We improve, in terms of constants, the existing results when the accuracy of sampling is measured in the Wasserstein distance and provide further insights on relations between, on the one hand, the Langevin Monte Carlo for sampling and, on the other hand, the gradient descent for optimization. More importantly, we establish non-asymptotic guarantees for the accuracy of a version of the Langevin Monte Carlo algorithm that is based on inaccurate evaluations of the gradient. Finally, we propose a variable-step version of the Langevin Monte Carlo algorithm that has two advantages. First, its step-sizes are independent of the target accuracy and, second, its rate provides a logarithmic improvement over the constant-step Langevin Monte Carlo algorithm.
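The Langevin Monte Carlo iteration the abstract refers to can be sketched in a few lines. This is a minimal illustration, not code from the paper: the update θ_{k+1} = θ_k − h·g_k + √(2h)·ξ_k uses a possibly inaccurate gradient evaluation g_k of ∇f (here modeled as the exact gradient plus Gaussian noise of a user-chosen scale), with ξ_k standard Gaussian. The function name `lmc_sample` and the noise model are illustrative assumptions.

```python
import numpy as np

def lmc_sample(grad_f, theta0, h, n_steps, rng, grad_noise=0.0):
    """Langevin Monte Carlo with (possibly) inaccurate gradients.

    Update: theta_{k+1} = theta_k - h * g_k + sqrt(2h) * xi_k,
    where g_k = grad_f(theta_k) + noise is an inexact gradient evaluation
    (the noise model here is an illustrative assumption) and xi_k ~ N(0, I).
    Targets the density p(theta) proportional to exp(-f(theta)).
    """
    theta = np.asarray(theta0, dtype=float)
    samples = []
    for _ in range(n_steps):
        # Inaccurate gradient: exact gradient plus Gaussian perturbation.
        g = grad_f(theta) + grad_noise * rng.standard_normal(theta.shape)
        theta = theta - h * g + np.sqrt(2.0 * h) * rng.standard_normal(theta.shape)
        samples.append(theta.copy())
    return np.array(samples)

# Example target: standard Gaussian, f(theta) = theta^2 / 2, grad f(theta) = theta
# (1-strongly convex with 1-Lipschitz gradient, so it satisfies the paper's setting).
rng = np.random.default_rng(0)
samples = lmc_sample(lambda t: t, theta0=np.array([3.0]), h=0.05,
                     n_steps=20000, rng=rng)
burn = samples[5000:]
print(burn.mean(), burn.var())  # close to the target's mean 0 and variance 1
```

Note that the constant-step discretization has a small bias of order h (for the Gaussian target above, the stationary variance is 1/(1 − h/2) rather than exactly 1), which is the kind of error the paper's Wasserstein bounds quantify; the variable-step variant described in the abstract shrinks the step sizes to reduce this effect.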
