
Stochastic gradient-free descents

Abstract

In this paper we propose stochastic gradient-free methods and accelerated methods with momentum for solving stochastic optimization problems. All of these methods rely on stochastic directions rather than stochastic gradients. We analyze their convergence behavior under the mean-variance framework and provide a theoretical analysis of the inclusion of momentum in stochastic settings, which reveals that the momentum term adds a deviation of order $\mathcal{O}(1/k)$ but controls the variance at order $\mathcal{O}(1/k)$ at the $k$th iteration. We show that, with a decaying stepsize $\alpha_k=\mathcal{O}(1/k)$, the stochastic gradient-free methods maintain the sublinear convergence rate $\mathcal{O}(1/k)$ and the accelerated methods with momentum achieve a convergence rate of $\mathcal{O}(1/k^2)$ in probability for strongly convex objectives with Lipschitz gradients; all of these methods converge to a solution with a zero expected gradient norm when the objective function is nonconvex, twice differentiable, and bounded below.
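The abstract does not spell out the update rules, but a common way to realize a stochastic-direction oracle is a two-point finite-difference estimate along a random unit direction, which can be paired with heavy-ball momentum and the decaying stepsize $\alpha_k=\mathcal{O}(1/k)$ mentioned above. The sketch below is illustrative only, under those assumptions; the function name `gradient_free_momentum_descent` and its parameters (`mu`, `beta`, `c`) are hypothetical and not the paper's notation.

```python
import numpy as np

def gradient_free_momentum_descent(f, x0, num_iters=500, mu=1e-4, c=0.2, beta=0.5, rng=None):
    """Illustrative random-direction descent with heavy-ball momentum.

    At iteration k, a random unit direction u_k replaces the gradient via a
    finite-difference estimate of the directional derivative, and the
    stepsize decays as alpha_k = c / k. This is a sketch of one plausible
    gradient-free update, not the paper's exact method.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float).copy()
    v = np.zeros_like(x)                        # momentum buffer
    for k in range(1, num_iters + 1):
        u = rng.standard_normal(x.shape)
        u /= np.linalg.norm(u)                  # random unit direction
        d = (f(x + mu * u) - f(x)) / mu         # directional-derivative estimate
        g = d * u                                # stochastic search direction
        alpha = c / k                            # decaying stepsize O(1/k)
        v = beta * v - alpha * g                 # heavy-ball momentum update
        x = x + v
    return x

if __name__ == "__main__":
    # Toy usage: a strongly convex quadratic; the objective should decrease.
    f = lambda x: 0.5 * np.dot(x, x)
    x0 = np.array([2.0, -1.0])
    x_final = gradient_free_momentum_descent(f, x0)
    print("initial objective:", f(x0), "final objective:", f(x_final))
```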
