Oracle lower bounds for stochastic gradient sampling algorithms

1 February 2020
Niladri S. Chatterji
Peter L. Bartlett
Philip M. Long
Abstract

We consider the problem of sampling from a strongly log-concave density in $\mathbb{R}^d$, and prove an information-theoretic lower bound on the number of stochastic gradient queries of the log density needed. Several popular sampling algorithms (including many Markov chain Monte Carlo methods) operate by using stochastic gradients of the log density to generate a sample; our results establish an information-theoretic limit for all these algorithms. We show that for every algorithm, there exists a well-conditioned strongly log-concave target density for which the distribution of points generated by the algorithm would be at least $\varepsilon$ away from the target in total variation distance if the number of gradient queries is less than $\Omega(\sigma^2 d/\varepsilon^2)$, where $\sigma^2 d$ is the variance of the stochastic gradient. Our lower bound follows by combining the ideas of Le Cam deficiency, routinely used in the comparison of statistical experiments, with standard information-theoretic tools used in lower bounding Bayes risk functions. To the best of our knowledge, our results provide the first nontrivial dimension-dependent lower bound for this problem.
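
The abstract concerns algorithms that generate samples by querying stochastic gradients of the log density. The sketch below illustrates one member of this algorithm class, stochastic gradient Langevin dynamics (SGLD), run against a Gaussian (hence strongly log-concave) target. The Gaussian target, the noise model with per-coordinate variance $\sigma^2$ (so total gradient variance $\sigma^2 d$, matching the scaling in the bound), the step size, and the query budget are illustrative assumptions, not the paper's construction or its lower-bound instance.

```python
import numpy as np

def noisy_grad_log_density(x, mean, sigma):
    """Stochastic gradient oracle for the log density of N(mean, I_d).

    The exact gradient of the log density is (mean - x); we add Gaussian
    noise with per-coordinate variance sigma^2, so the total variance of
    the stochastic gradient is sigma^2 * d (an illustrative noise model).
    """
    d = x.shape[0]
    return (mean - x) + sigma * np.random.randn(d)

def sgld_sample(mean, d=10, sigma=1.0, step=1e-2, n_queries=10_000):
    """Stochastic gradient Langevin dynamics: one gradient query per step."""
    x = np.zeros(d)
    for _ in range(n_queries):
        g = noisy_grad_log_density(x, mean, sigma)
        x = x + step * g + np.sqrt(2.0 * step) * np.random.randn(d)
    return x

if __name__ == "__main__":
    d = 10
    mean = np.ones(d)
    samples = np.stack([sgld_sample(mean, d=d) for _ in range(200)])
    print("empirical mean:", samples.mean(axis=0))  # should approach `mean`
```

In this framing, the paper's result says that no algorithm of this type, however it chooses its queries and its output point, can be within $\varepsilon$ of every well-conditioned strongly log-concave target in total variation using fewer than order $\sigma^2 d/\varepsilon^2$ such gradient queries.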
