
Stochastic Alternating Direction Method of Multipliers with Variance Reduction for Nonconvex Optimization

Abstract

In this work, we study the stochastic alternating direction method of multipliers (ADMM) for optimizing nonconvex problems, and propose two classes of nonconvex stochastic ADMM with variance reduction. The first class is the nonconvex stochastic variance reduced gradient ADMM (SVRG-ADMM), which uses a multi-stage strategy to progressively reduce the variance of the stochastic gradients. The second is the nonconvex stochastic average gradient ADMM (SAG-ADMM), which additionally uses the old gradients estimated in previous iterations to reduce the variance of the stochastic gradients. Theoretically, we analyze the convergence of SVRG-ADMM and SAG-ADMM, and prove that they enjoy an iteration complexity bound of O(1/ε) to reach an ε-stationary solution. In addition, we prove that the simple stochastic ADMM (S-ADMM), in which the variance of the stochastic gradients is not controlled, can diverge under some conditions. Finally, experimental results on several real datasets corroborate our theoretical results. To the best of our knowledge, this is the first study of the convergence and iteration complexity of stochastic ADMM for nonconvex problems.
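To make the variance-reduction idea behind SVRG-ADMM concrete, the following is a minimal sketch of the standard SVRG gradient estimator on a plain least-squares problem (not the paper's ADMM updates; the problem, parameters, and function names here are illustrative assumptions). At each stage, a full gradient is computed at a snapshot point; inner iterations then correct a single-sample gradient using the snapshot, keeping the estimator unbiased while its variance shrinks as the iterate approaches the snapshot.

```python
import numpy as np

# Illustrative example: SVRG-style variance-reduced gradient descent on
# f(x) = (1/n) * sum_i 0.5 * (a_i^T x - b_i)^2  (synthetic least squares).
rng = np.random.default_rng(0)
n, d = 200, 5
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)

def grad_i(x, i):
    # Gradient of the i-th component f_i(x) = 0.5 * (a_i^T x - b_i)^2.
    return A[i] * (A[i] @ x - b[i])

def full_grad(x):
    # Full gradient (1/n) * sum_i grad f_i(x).
    return A.T @ (A @ x - b) / n

def svrg(x0, stages=30, inner=200, step=0.01):
    x = x0.copy()
    for _ in range(stages):           # multi-stage loop
        snapshot = x.copy()
        mu = full_grad(snapshot)      # full gradient at the snapshot
        for _ in range(inner):
            i = rng.integers(n)
            # Variance-reduced estimator: unbiased, and its variance
            # vanishes as x approaches the snapshot (and the optimum).
            v = grad_i(x, i) - grad_i(snapshot, i) + mu
            x -= step * v
    return x

x_svrg = svrg(np.zeros(d))
x_star = np.linalg.lstsq(A, b, rcond=None)[0]
err = np.linalg.norm(x_svrg - x_star)
```

The key design point, and the reason the paper's convergence analysis goes through, is that `v` is an unbiased gradient estimate whose variance is tied to the distance from the snapshot, so no diminishing step size is needed, unlike plain stochastic gradients whose variance stays bounded away from zero.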
