Nonasymptotic estimates for Stochastic Gradient Langevin Dynamics under
local conditions in nonconvex optimization
Applied Mathematics and Optimization (AMO), 2019
Abstract
Within the context of empirical risk minimization, see Raginsky, Rakhlin, and Telgarsky (2017), we are concerned with a non-asymptotic analysis of sampling algorithms used in nonconvex optimization. In particular, we obtain non-asymptotic error bounds for a popular class of algorithms called Stochastic Gradient Langevin Dynamics (SGLD). These results are derived in appropriate Wasserstein distances in the absence of log-concavity of the target distribution. More precisely, local Lipschitzness of the stochastic gradient is assumed, and, furthermore, the dissipativity and convexity-at-infinity conditions are relaxed by removing their uniform dependence on the data variable.
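For orientation, the SGLD recursion that this class of algorithms refers to is sketched below. The toy loss, data, and parameter names (lam for the step size, beta for the inverse temperature, stochastic_gradient for the minibatch gradient estimate) are illustrative assumptions, not the paper's exact setting.

```python
import numpy as np

# Illustrative sketch of the SGLD recursion (notation assumed, not taken
# from the paper): at each step, draw a minibatch, form a stochastic
# gradient of the empirical risk, take a gradient step of size lam, and
# add Gaussian noise scaled by sqrt(2 * lam / beta).

rng = np.random.default_rng(0)

# Toy empirical-risk setup: data points x_i in R^d and a nonconvex per-sample loss.
d, n = 2, 1000
data = rng.normal(size=(n, d))

def stochastic_gradient(theta, batch):
    # Minibatch gradient of the toy nonconvex loss
    #   (1 / |batch|) * sum_i (1 - exp(-|theta - x_i|^2 / 2)).
    diff = theta - batch                               # shape (b, d)
    weights = np.exp(-0.5 * np.sum(diff ** 2, axis=1)) # shape (b,)
    return (weights[:, None] * diff).mean(axis=0)

def sgld(theta0, lam=1e-2, beta=10.0, n_steps=5000, batch_size=32):
    theta = np.array(theta0, dtype=float)
    for _ in range(n_steps):
        batch = data[rng.choice(n, size=batch_size, replace=False)]
        noise = rng.normal(size=theta.shape)
        theta = (theta
                 - lam * stochastic_gradient(theta, batch)
                 + np.sqrt(2.0 * lam / beta) * noise)
    return theta

print(sgld(theta0=np.ones(d)))
```

In this reading, the paper's contribution is to bound, non-asymptotically and in Wasserstein distance, how far the law of such an iterate is from the target (Gibbs) distribution under the relaxed local conditions described above.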
