
Convergence of Unadjusted Langevin in High Dimensions: Delocalization of Bias

Main: 13 pages
1 figure
Bibliography: 3 pages
Appendix: 13 pages
Abstract

The unadjusted Langevin algorithm is commonly used to sample probability distributions in extremely high-dimensional settings. However, existing analyses of the algorithm for strongly log-concave distributions suggest that, as the dimension $d$ of the problem increases, the number of iterations required to ensure convergence within a desired error in the $W_2$ metric scales in proportion to $d$ or $\sqrt{d}$. In this paper, we argue that, despite this poor scaling of the $W_2$ error for the full set of variables, the behavior for a small number of variables can be significantly better: a number of iterations proportional to $K$, up to logarithmic terms in $d$, often suffices for the algorithm to converge to within a desired $W_2$ error for all $K$-marginals. We refer to this effect as delocalization of bias. We show that the delocalization effect does not hold universally and prove its validity for Gaussian distributions and strongly log-concave distributions with certain sparse interactions. Our analysis relies on a novel $W_{2,\ell^\infty}$ metric to measure convergence. A key technical challenge we address is the lack of a one-step contraction property in this metric. Finally, we use asymptotic arguments to explore potential generalizations of the delocalization effect beyond the Gaussian and sparse interactions setting.
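For readers unfamiliar with the algorithm, here is a minimal sketch of the standard unadjusted Langevin (ULA) update $x_{k+1} = x_k - h\,\nabla U(x_k) + \sqrt{2h}\,\xi_k$ that the abstract refers to; the step size, target distribution, and iteration counts below are illustrative choices, not values from the paper:

```python
import numpy as np

def ula(grad_U, x0, step, n_steps, rng):
    """Unadjusted Langevin algorithm:
    x_{k+1} = x_k - step * grad_U(x_k) + sqrt(2 * step) * xi_k,  xi_k ~ N(0, I).
    """
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        x = x - step * grad_U(x) + np.sqrt(2.0 * step) * rng.standard_normal(x.shape)
    return x

# Illustrative strongly log-concave target: N(0, I_d), so U(x) = |x|^2 / 2
# and grad_U(x) = x. (A toy choice for demonstration, not from the paper.)
d = 1000
rng = np.random.default_rng(0)
samples = np.stack(
    [ula(lambda x: x, np.zeros(d), step=0.1, n_steps=200, rng=rng) for _ in range(100)]
)

# The delocalization claim concerns low-dimensional marginals: the W_2 bias of,
# say, the first K = 2 coordinates can be far smaller than the bias of the full
# d-dimensional law. Here we simply inspect those marginals empirically.
print(samples[:, :2].mean(axis=0), samples[:, :2].std(axis=0))
```

A plausible reading of the $W_{2,\ell^\infty}$ metric mentioned above (an assumption from the notation, not stated in this abstract) is the Wasserstein-2 distance with the $\ell^\infty$ ground norm, $W_{2,\ell^\infty}(\mu,\nu)^2 = \inf_{\gamma \in \Gamma(\mu,\nu)} \mathbb{E}_{(X,Y)\sim\gamma}\,\|X - Y\|_\infty^2$, which controls every coordinate, and hence every $K$-marginal, under a single coupling.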
