
Mini-Batch Stochastic ADMMs for Nonconvex Nonsmooth Optimization

Abstract

In this paper, we study mini-batch stochastic ADMMs (alternating direction methods of multipliers) for nonconvex nonsmooth optimization. We prove that, given an appropriate mini-batch size, the mini-batch stochastic ADMM without the variance reduction (VR) technique is convergent and reaches a convergence rate of $O(1/T)$ to obtain a stationary point of the nonconvex optimization, where $T$ denotes the number of iterations. Moreover, we extend the mini-batch stochastic gradient method to both the nonconvex SVRG-ADMM and SAGA-ADMM proposed in our initial paper \citep{huang2016stochastic}, and prove that these mini-batch stochastic ADMMs also reach the convergence rate of $O(1/T)$ without any condition on the mini-batch size. In particular, we give a specific parameter selection for the step size $\eta$ of the stochastic gradients and the penalty parameter $\rho$ of the augmented Lagrangian function. Finally, experimental results demonstrate the effectiveness of our algorithms.
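To make the algorithmic setting concrete, the sketch below shows a minimal mini-batch stochastic ADMM without variance reduction, applied to an illustrative lasso-style problem $\min_x \frac{1}{2n}\|Ax-b\|^2 + \lambda\|y\|_1$ subject to $x = y$. The step size $\eta$ and penalty parameter $\rho$ follow the paper's notation, but the specific loss, the linearized closed-form $x$-update, and all parameter values here are assumptions for illustration, not the paper's exact algorithm or settings.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 (handles the nonsmooth term).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def minibatch_stochastic_admm(A, b, lam=0.1, eta=0.01, rho=1.0,
                              batch_size=32, T=1000, seed=0):
    """Mini-batch stochastic ADMM (no variance reduction) for
    min_x (1/2n)*||A x - b||^2 + lam*||y||_1  s.t.  x = y.
    This is an illustrative sketch, not the authors' reference code."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)   # block handled by the smooth (stochastic) term
    y = np.zeros(d)   # block handled by the nonsmooth term
    u = np.zeros(d)   # scaled dual variable for the constraint x - y = 0
    for _ in range(T):
        # Mini-batch stochastic gradient of the smooth loss at x.
        idx = rng.choice(n, size=batch_size, replace=False)
        g = A[idx].T @ (A[idx] @ x - b[idx]) / batch_size
        # Linearized x-update: closed-form minimizer of
        #   <g, x'> + ||x' - x||^2 / (2*eta) + (rho/2) * ||x' - y + u||^2
        x = (x / eta + rho * (y - u) - g) / (1.0 / eta + rho)
        # y-update: proximal (soft-thresholding) step on the l1 term.
        y = soft_threshold(x + u, lam / rho)
        # Dual update (ascent on the constraint residual x - y).
        u = u + x - y
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((500, 20))
    x_true = np.zeros(20)
    x_true[:5] = 1.0
    b = A @ x_true + 0.01 * rng.standard_normal(500)
    x_hat = minibatch_stochastic_admm(A, b)
    print("recovered support:", np.nonzero(np.abs(x_hat) > 0.1)[0])
```

The SVRG-ADMM and SAGA-ADMM variants discussed in the abstract would replace the plain mini-batch gradient `g` above with a variance-reduced gradient estimator, which is what removes the condition on the mini-batch size in the stated $O(1/T)$ rate.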
