
Stochastic Conjugate Gradient Algorithm with Variance Reduction

IEEE Transactions on Neural Networks and Learning Systems (IEEE TNNLS), 2017
Abstract

Conjugate gradient methods are an important class of methods for solving linear systems of equations and nonlinear optimization problems. In this work, we propose a new stochastic conjugate gradient algorithm with variance reduction (CGVR) and prove its linear convergence with the Fletcher-Reeves update for strongly convex and smooth functions. We experimentally demonstrate that CGVR converges faster than its counterparts on six large-scale optimization problems that may be convex, non-convex, or non-smooth, and that its AUC (area under the curve) performance with L2-regularized L2-loss is comparable to that of LIBLINEAR, with a significant improvement in computational efficiency.
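For intuition, the sketch below shows one way the ingredients named in the abstract (stochastic component gradients, SVRG-style variance reduction, and Fletcher-Reeves conjugate directions) could fit together. It is a minimal illustration, not the authors' exact algorithm: the function name `cgvr_sketch`, the fixed step size `alpha`, the inner-loop length `m`, and the periodic direction restart are all assumptions made here for the sake of a runnable example.

```python
import numpy as np

def cgvr_sketch(grad_i, w0, n, n_epochs=20, m=None, alpha=0.05,
                restart=10, seed=0):
    """Illustrative CGVR-style loop: SVRG variance reduction combined
    with Fletcher-Reeves conjugate directions (a sketch only; the
    paper's algorithm specifies its own step-size and convergence rules).

    grad_i(w, i) -- gradient of the i-th component function at w
    n            -- number of component functions (training samples)
    """
    rng = np.random.default_rng(seed)
    m = m or n                       # assumption: one pass per epoch
    w = w0.astype(float).copy()
    for _ in range(n_epochs):
        w_snap = w.copy()
        # Full gradient at the snapshot: the variance-reduction anchor.
        mu = np.mean([grad_i(w_snap, i) for i in range(n)], axis=0)
        g_prev, d = None, None
        for k in range(m):
            i = rng.integers(n)
            # SVRG estimator: unbiased, with variance shrinking as w -> w_snap.
            g = grad_i(w, i) - grad_i(w_snap, i) + mu
            if d is None or k % restart == 0:
                # Periodic restart is a common CG safeguard (assumed here).
                d = -g
            else:
                # Fletcher-Reeves coefficient: ||g_k||^2 / ||g_{k-1}||^2.
                beta = (g @ g) / max(g_prev @ g_prev, 1e-12)
                d = -g + beta * d
            w = w + alpha * d        # fixed step size; the paper's rule may differ
            g_prev = g
    return w

# Toy usage: least squares, i.e. an unregularized L2-loss.
X = np.random.default_rng(1).normal(size=(200, 5))
y = X @ np.arange(5.0)
w_hat = cgvr_sketch(lambda w, i: (X[i] @ w - y[i]) * X[i],
                    w0=np.zeros(5), n=200)
```

The structure mirrors SVRG (outer snapshots with a full gradient, inner stochastic steps), with the plain negative-gradient step replaced by a Fletcher-Reeves conjugate direction built from the variance-reduced gradients.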
