Distributed Stochastic Consensus Optimization with Momentum for Nonconvex Nonsmooth Problems

10 November 2020
Zhiguo Wang
Jiawei Zhang
Tsung-Hui Chang
Jian Li
Zhi-Quan Luo
arXiv:2011.05082
Abstract

While many distributed optimization algorithms have been proposed for solving smooth or convex problems over networks, few of them can handle non-convex and non-smooth problems. Based on a proximal primal-dual approach, this paper presents a new (stochastic) distributed algorithm with Nesterov momentum for accelerated optimization of non-convex and non-smooth problems. Theoretically, we show that the proposed algorithm can achieve an $\epsilon$-stationary solution under a constant step size with $\mathcal{O}(1/\epsilon^2)$ computation complexity and $\mathcal{O}(1/\epsilon)$ communication complexity. Compared to existing gradient-tracking-based methods, the proposed algorithm has the same order of computation complexity but a lower order of communication complexity. To the best of our knowledge, this is the first stochastic algorithm with $\mathcal{O}(1/\epsilon)$ communication complexity for non-convex and non-smooth problems. Numerical experiments on a distributed non-convex regression problem and a deep-neural-network-based classification problem are presented to illustrate the effectiveness of the proposed algorithms.
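The abstract names the main ingredients of the method: consensus over a network, stochastic gradients of a smooth non-convex local loss, a proximal step for the non-smooth term, and Nesterov momentum. The sketch below combines these ingredients for a toy decentralized non-convex regression with an L1 regularizer. It is an illustration only, not the paper's primal-dual algorithm; the network topology, loss, mixing matrix, and step sizes are all hypothetical.

```python
# Minimal illustrative sketch (NOT the paper's algorithm): consensus mixing,
# stochastic gradients of a smooth non-convex local loss, a proximal step for
# the non-smooth L1 term, and Nesterov-style momentum. All problem data and
# parameters below are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim, n_local = 5, 20, 100

# Hypothetical local data: each agent holds its own regression problem.
A = [rng.normal(size=(n_local, dim)) for _ in range(n_agents)]
b = [rng.normal(size=n_local) for _ in range(n_agents)]

# Doubly stochastic mixing matrix for a ring network (hypothetical topology).
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i - 1) % n_agents] = 0.25
    W[i, (i + 1) % n_agents] = 0.25

def stoch_grad(i, x, batch=10):
    """Stochastic gradient of agent i's smooth non-convex (Welsch) loss."""
    idx = rng.choice(n_local, size=batch, replace=False)
    Ai, bi = A[i][idx], b[i][idx]
    r = Ai @ x - bi
    # Gradient of mean(1 - exp(-r^2 / 2)), a smooth but non-convex loss.
    return Ai.T @ (r * np.exp(-r ** 2 / 2)) / batch

def prox_l1(x, t):
    """Proximal operator of t * ||x||_1 (soft thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

alpha, beta, lam = 0.01, 0.9, 0.05   # step size, momentum, L1 weight (hypothetical)
X = np.zeros((n_agents, dim))        # current iterates, one row per agent
X_prev = X.copy()

for k in range(200):
    # Nesterov extrapolation using the previous iterate.
    Y = X + beta * (X - X_prev)
    # Consensus mixing with neighbors, then a local stochastic gradient step.
    Z = W @ Y
    G = np.stack([stoch_grad(i, Y[i]) for i in range(n_agents)])
    X_prev = X
    # Proximal step handles the non-smooth L1 regularizer.
    X = prox_l1(Z - alpha * G, alpha * lam)

print("consensus error:", np.linalg.norm(X - X.mean(axis=0)))
```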
