
Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization

SIAM Journal on Optimization (SIAM J. Optim.), 2016
Abstract

In this paper we study stochastic quasi-Newton methods for nonconvex stochastic optimization, where we assume that noisy information about the gradients of the objective function is available via a stochastic first-order oracle (SFO). We propose a general framework for such methods, prove their almost sure convergence to stationary points, and analyze their worst-case iteration complexity. When a randomly chosen iterate is returned as the output of such an algorithm, we prove that in the worst case the SFO-call complexity is $O(\epsilon^{-2})$ to ensure that the expectation of the squared norm of the gradient is smaller than the given accuracy tolerance $\epsilon$. We also propose a specific algorithm, namely a stochastic damped L-BFGS method, that falls under the proposed framework. Numerical results on a nonconvex classification problem, for both synthetic and real data, demonstrate the promising potential of the proposed method.
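To make the flavor of the abstract concrete, the following is a minimal sketch of a stochastic damped L-BFGS iteration of the kind described: stochastic gradients from an SFO drive an L-BFGS two-loop recursion, and a simplified Powell-style damping (here taken relative to s^T s, with a hypothetical threshold `delta`; the paper's exact damping rule and step-size schedule may differ) keeps the curvature pairs well defined on nonconvex problems. The function names and parameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def two_loop_recursion(grad, s_list, y_list):
    """Standard L-BFGS two-loop recursion: returns an approximation of H_k @ grad."""
    q = grad.copy()
    rhos = [1.0 / (y @ s) for s, y in zip(s_list, y_list)]
    alphas = []
    for s, y, rho in zip(reversed(s_list), reversed(y_list), reversed(rhos)):
        alpha = rho * (s @ q)
        alphas.append(alpha)
        q -= alpha * y
    if s_list:  # initial Hessian scaling gamma_k = (s^T y) / (y^T y) from the newest pair
        s, y = s_list[-1], y_list[-1]
        q *= (s @ y) / (y @ y)
    for (s, y, rho), alpha in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        beta = rho * (y @ q)
        q += (alpha - beta) * s
    return q

def stochastic_damped_lbfgs(stochastic_grad, x0, steps=1000, memory=10,
                            step_size=lambda k: 1.0 / np.sqrt(k + 1), delta=0.2):
    """Sketch of a stochastic damped L-BFGS loop (assumptions: `stochastic_grad` is
    the SFO returning a noisy gradient; damping threshold `delta` is hypothetical)."""
    x = x0.copy()
    s_list, y_list = [], []
    g = stochastic_grad(x)
    for k in range(steps):
        d = two_loop_recursion(g, s_list, y_list)   # approximate H_k g_k
        x_new = x - step_size(k) * d                # quasi-Newton step with decaying step size
        g_new = stochastic_grad(x_new)
        s, y = x_new - x, g_new - g
        sty, sts = s @ y, s @ s
        # Powell-style damping (simplified, with B_k ~ I): blend y toward s so that
        # s^T y_bar >= delta * s^T s, guaranteeing positive curvature for the update.
        if sty < delta * sts:
            theta = (1.0 - delta) * sts / (sts - sty)
            y = theta * y + (1.0 - theta) * s
        s_list.append(s); y_list.append(y)
        if len(s_list) > memory:                    # limited-memory: keep only recent pairs
            s_list.pop(0); y_list.pop(0)
        x, g = x_new, g_new
    return x
```

Damping is what distinguishes this from a plain stochastic L-BFGS sketch: without it, noisy or negative-curvature pairs (s, y) can make 1/(s^T y) blow up or flip sign, whereas the blended y keeps every stored pair usable in the two-loop recursion.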
