Faster Stochastic Alternating Direction Method of Multipliers for Nonconvex Optimization

4 August 2020
Feihu Huang
Songcan Chen
Heng Huang
Abstract

In this paper, we propose a faster stochastic alternating direction method of multipliers (ADMM) for nonconvex optimization by using a new stochastic path-integrated differential estimator (SPIDER), called SPIDER-ADMM. Moreover, we prove that SPIDER-ADMM achieves a record-breaking incremental first-order oracle (IFO) complexity of $\mathcal{O}(n+n^{1/2}\epsilon^{-1})$ for finding an $\epsilon$-approximate stationary point, which improves on the deterministic ADMM by a factor of $\mathcal{O}(n^{1/2})$, where $n$ denotes the sample size. As one of the major contributions of this paper, we provide a new theoretical analysis framework for nonconvex stochastic ADMM methods that yields the optimal IFO complexity. Based on this new analysis framework, we study the previously unresolved optimal IFO complexity of the existing nonconvex SVRG-ADMM and SAGA-ADMM methods, and prove that they have an optimal IFO complexity of $\mathcal{O}(n+n^{2/3}\epsilon^{-1})$. Thus, SPIDER-ADMM improves the existing stochastic ADMM methods by a factor of $\mathcal{O}(n^{1/6})$. Moreover, we extend SPIDER-ADMM to the online setting and propose a faster online SPIDER-ADMM. Our theoretical analysis shows that the online SPIDER-ADMM has an IFO complexity of $\mathcal{O}(\epsilon^{-3/2})$, which improves the existing best results by a factor of $\mathcal{O}(\epsilon^{-1/2})$. Finally, experimental results on benchmark datasets validate that the proposed algorithms converge faster than the existing ADMM algorithms for nonconvex optimization.
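
The driver of the improved IFO complexity is the SPIDER estimator: a full gradient is computed only once per epoch, and between refreshes the gradient estimate is corrected recursively from small mini-batches along the iterate path. The Python/NumPy sketch below illustrates that estimator under assumed names (grad_batch, q, b, eta, spider_loop are illustrative and not taken from the paper); in SPIDER-ADMM the estimate v_t replaces the exact gradient in the ADMM primal subproblem rather than driving the plain gradient step used here.

import numpy as np

def spider_loop(grad_batch, x0, n, T, q=50, b=8, eta=0.1, seed=0):
    """Minimal sketch of the SPIDER gradient estimator.

    grad_batch(x, idx): average gradient of the component functions f_i over indices idx.
    n: sample size; T: iterations; q: epoch length; b: mini-batch size between refreshes.
    """
    rng = np.random.default_rng(seed)
    x, x_prev, v = x0.copy(), None, None
    for t in range(T):
        if t % q == 0:
            # Full-gradient refresh at the start of each epoch.
            v = grad_batch(x, np.arange(n))
        else:
            # Path-integrated correction:
            #   v_t = grad_S(x_t) - grad_S(x_{t-1}) + v_{t-1}
            idx = rng.choice(n, size=b, replace=False)
            v = grad_batch(x, idx) - grad_batch(x_prev, idx) + v
        x_prev = x
        x = x - eta * v  # stand-in for the ADMM x-subproblem update in the paper
    return x

In analyses of this kind, the epoch length q and mini-batch size b are typically taken on the order of $n^{1/2}$, which is roughly what produces the $\mathcal{O}(n+n^{1/2}\epsilon^{-1})$ scaling quoted in the abstract.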
