arXiv:2206.06491 (v2, latest)
On the Computational Complexity of Metropolis-Adjusted Langevin Algorithms for Bayesian Posterior Sampling

13 June 2022
Rong Tang
Yun Yang
Abstract

In this paper, we study the computational complexity of sampling from a Bayesian posterior (or pseudo-posterior) using the Metropolis-adjusted Langevin algorithm (MALA). MALA first uses a discretization of the Langevin SDE to propose a new state, and then adjusts the proposed state via a Metropolis-Hastings rejection step. Most existing theoretical analyses of MALA rely on smoothness and strong log-concavity of the target distribution, properties which are unfortunately often violated in practical Bayesian problems. Our analysis instead relies on statistical large-sample theory, which restricts the deviation of the Bayesian posterior from being smooth and log-concave in a very specific manner. Specifically, we establish the optimal parameter-dimension dependence of d^{1/3} in the non-asymptotic mixing-time upper bound for MALA after the burn-in period, without assuming smoothness or log-concavity of the target posterior density; MALA is slightly modified by replacing the gradient with any subgradient where the density is non-differentiable. In comparison, the well-known scaling limit for the classical Metropolis random walk (MRW) suggests a linear dependence on the dimension d in its mixing time. Thus, our results formally verify the conventional wisdom that MALA, as a first-order method using gradient information, is more efficient for Bayesian computation than MRW, a zeroth-order method that uses only function values.
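The two-stage MALA update described above (Langevin proposal, then Metropolis-Hastings correction) can be sketched in a few lines of NumPy. This is a minimal illustrative sketch, not code from the paper; the function names and the step size `h` are assumptions for the example.

```python
import numpy as np

def mala_step(x, log_pi, grad_log_pi, h, rng):
    """One MALA step targeting a density pi, given log pi and its (sub)gradient."""
    d = x.shape[0]
    # Stage 1: Langevin proposal — Euler discretization of the Langevin SDE.
    x_prop = x + 0.5 * h * grad_log_pi(x) + np.sqrt(h) * rng.standard_normal(d)

    # Gaussian proposal log-density q(to | from), up to a common constant.
    def log_q(to, frm):
        mean = frm + 0.5 * h * grad_log_pi(frm)
        return -np.sum((to - mean) ** 2) / (2.0 * h)

    # Stage 2: Metropolis-Hastings accept/reject to correct discretization bias.
    log_alpha = (log_pi(x_prop) - log_pi(x)
                 + log_q(x, x_prop) - log_q(x_prop, x))
    if np.log(rng.uniform()) < log_alpha:
        return x_prop, True
    return x, False
```

For a non-differentiable log-density, `grad_log_pi` can return any subgradient, matching the modification the abstract describes.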
