Global Convergence of Langevin Dynamics Based Algorithms for Nonconvex Optimization

Abstract

We present a unified framework to analyze the global convergence of Langevin dynamics based algorithms for nonconvex finite-sum optimization with $n$ component functions. At the core of our analysis is a direct analysis of the ergodicity of the numerical approximations to Langevin dynamics, which leads to faster convergence rates. Specifically, we show that gradient Langevin dynamics (GLD) and stochastic gradient Langevin dynamics (SGLD) converge to the almost minimizer within $\tilde O\big(nd/(\lambda\epsilon)\big)$ and $\tilde O\big(d^7/(\lambda^5\epsilon^5)\big)$ stochastic gradient evaluations respectively, where $d$ is the problem dimension and $\lambda$ is the spectral gap of the Markov chain generated by GLD. Both results improve upon the best known gradient complexity results (Raginsky et al., 2017). Furthermore, for the first time we prove the global convergence guarantee for variance reduced stochastic gradient Langevin dynamics (SVRG-LD) to the almost minimizer within $\tilde O\big(\sqrt{n}d^5/(\lambda^4\epsilon^{5/2})\big)$ stochastic gradient evaluations, which outperforms the gradient complexities of GLD and SGLD in a wide regime. Our theoretical analyses shed some light on using Langevin dynamics based algorithms for nonconvex optimization with provable guarantees.
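For readers unfamiliar with the algorithms named above, the following is a minimal sketch of the GLD and SGLD update rules under the standard Euler-Maruyama discretization of Langevin dynamics; the step size `eta`, inverse temperature `beta`, batch size, and gradient oracles here are illustrative placeholders, not the specific quantities analyzed in the paper.

```python
import numpy as np

def gld_step(x, grad_F, eta, beta, rng):
    """One gradient Langevin dynamics (GLD) step:
    x_{k+1} = x_k - eta * grad F(x_k) + sqrt(2*eta/beta) * xi,  xi ~ N(0, I_d).
    grad_F(x) is assumed to return the full gradient of the finite-sum objective."""
    noise = rng.standard_normal(x.shape)
    return x - eta * grad_F(x) + np.sqrt(2.0 * eta / beta) * noise

def sgld_step(x, grad_f_i, n, batch_size, eta, beta, rng):
    """One stochastic gradient Langevin dynamics (SGLD) step: same injected Gaussian
    noise as GLD, but the full gradient is replaced by a mini-batch estimate.
    grad_f_i(x, i) is assumed to return the gradient of the i-th component function."""
    idx = rng.choice(n, size=batch_size, replace=False)
    stoch_grad = np.mean([grad_f_i(x, i) for i in idx], axis=0)
    noise = rng.standard_normal(x.shape)
    return x - eta * stoch_grad + np.sqrt(2.0 * eta / beta) * noise
```

SVRG-LD follows the same pattern but replaces the mini-batch gradient with a variance-reduced estimate built from a periodically refreshed full-gradient snapshot.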
