Convergence of Unadjusted Langevin in High Dimensions: Delocalization of Bias
The unadjusted Langevin algorithm is commonly used to sample probability distributions in extremely high-dimensional settings. However, existing analyses of the algorithm for strongly log-concave distributions suggest that, as the dimension $d$ of the problem increases, the number of iterations required to ensure convergence within a desired error in the $W^2$ metric scales in proportion to $d$ or $\sqrt{d}$. In this paper, we argue that, despite this poor scaling of the error for the full set of variables, the behavior for a small number of variables can be significantly better: a number of iterations proportional to $k$, up to logarithmic terms in $d$, often suffices for the algorithm to converge to within a desired error for all $k$-marginals. We refer to this effect as delocalization of bias. We show that the delocalization effect does not hold universally and prove its validity for Gaussian distributions and strongly log-concave distributions with certain sparse interactions. Our analysis relies on a novel metric to measure convergence. A key technical challenge we address is the lack of a one-step contraction property in this metric. Finally, we use asymptotic arguments to explore potential generalizations of the delocalization effect beyond the Gaussian and sparse interactions setting.
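To make the object of study concrete, the following is a minimal sketch of the unadjusted Langevin algorithm on a toy target, a standard Gaussian in $d$ dimensions (so $U(x) = |x|^2/2$ and $\nabla U(x) = x$). The step size, iteration count, and target are illustrative choices of ours, not values from the paper; the sketch only demonstrates the discretization bias that the abstract refers to, not the delocalization result itself.

```python
import numpy as np

def ula(grad_U, x0, h, n_steps, rng):
    """Unadjusted Langevin: x_{k+1} = x_k - h * grad_U(x_k) + sqrt(2h) * xi_k,
    with xi_k standard Gaussian noise. Returns the full trajectory."""
    x = np.array(x0, dtype=float)
    traj = np.empty((n_steps, x.size))
    for k in range(n_steps):
        x = x - h * grad_U(x) + np.sqrt(2 * h) * rng.standard_normal(x.size)
        traj[k] = x
    return traj

rng = np.random.default_rng(0)
d, h = 10, 0.1  # illustrative dimension and step size
traj = ula(lambda x: x, np.zeros(d), h, n_steps=100_000, rng=rng)

# For this Gaussian target the ULA iterates have stationary per-coordinate
# variance 1 / (1 - h/2) rather than the true value 1: the chain converges,
# but to a biased distribution. This asymptotic bias is what the paper's
# marginal-wise ("delocalization") analysis concerns.
var_est = traj[20_000:].var()  # pooled over coordinates, after burn-in
print(var_est)  # roughly 1 / (1 - h/2)
```

Because the target is a product measure here, every coordinate behaves identically; the paper's point is about how such per-marginal errors scale with $k$ and $d$ in the non-product, strongly log-concave case.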