
Stochastic Nested Variance Reduction for Nonconvex Optimization

Abstract

We study finite-sum nonconvex optimization problems, where the objective function is an average of $n$ nonconvex functions. We propose a new stochastic gradient descent algorithm based on nested variance reduction. Compared with the conventional stochastic variance reduced gradient (SVRG) algorithm, which uses two reference points to construct a semi-stochastic gradient with diminishing variance in each iteration, our algorithm uses $K+1$ nested reference points to build a semi-stochastic gradient whose variance is further reduced in each iteration. For smooth nonconvex functions, the proposed algorithm converges to an $\epsilon$-approximate first-order stationary point (i.e., $\|\nabla F(\mathbf{x})\|_2 \leq \epsilon$) within $\tilde O(n\land \epsilon^{-2}+\epsilon^{-3}\land n^{1/2}\epsilon^{-2})$ stochastic gradient evaluations. This improves the best known gradient complexity of SVRG, $O(n+n^{2/3}\epsilon^{-2})$, and that of SCSG, $O(n\land \epsilon^{-2}+\epsilon^{-10/3}\land n^{2/3}\epsilon^{-2})$. For gradient dominated functions, our algorithm also achieves better gradient complexity than the state-of-the-art algorithms. Thorough experimental results on different nonconvex optimization problems back up our theory.
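To make the contrast in the abstract concrete, below is a minimal Python sketch (not the authors' reference implementation) of the two gradient estimators: the SVRG semi-stochastic gradient built from a single reference point, and a nested estimator built from K+1 reference points, each correcting the previous level on a smaller minibatch. The helpers grad_single, grad_full, and the per-level batches are assumptions made purely for illustration; the actual reference-update schedule and batch sizes in the paper differ.

    import numpy as np

    def svrg_gradient(grad_single, grad_full_ref, x, x_ref, i):
        # SVRG estimator: current iterate x, one reference point x_ref,
        # and the precomputed full gradient grad_full_ref at x_ref.
        return grad_single(i, x) - grad_single(i, x_ref) + grad_full_ref

    def nested_gradient(grad_single, grad_full, x_refs, batches):
        # Nested estimator sketched from the abstract: x_refs[0..K] are the
        # K+1 reference points, with x_refs[-1] the current iterate.
        # Level 0 uses an exact (or large-batch) gradient; each level l >= 1
        # adds a correction estimated on its own (typically smaller) batch.
        v = grad_full(x_refs[0])
        for l in range(1, len(x_refs)):
            corr = np.mean(
                [grad_single(i, x_refs[l]) - grad_single(i, x_refs[l - 1])
                 for i in batches[l]],
                axis=0,
            )
            v = v + corr
        return v

With a single inner reference point (K = 1) the nested estimator reduces to the SVRG form; the abstract's complexity gain comes from using several nested levels whose corrections have progressively smaller variance.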
