Parallel Restarted SPIDER -- Communication Efficient Distributed Nonconvex Optimization with Optimal Computation Complexity

12 December 2019
Pranay Sharma
Swatantra Kafle
Prashant Khanduri
Saikiran Bulusu
K. Rajawat
P. Varshney
    FedML
Abstract

In this paper, we propose a distributed algorithm for stochastic smooth, non-convex optimization. We assume a worker-server architecture where $N$ nodes, each having $n$ (potentially infinite) samples, collaborate with the help of a central server to perform the optimization task. The global objective is to minimize the average of the local cost functions available at the individual nodes. The proposed approach is a non-trivial extension of the popular parallel-restarted SGD algorithm, incorporating the optimal variance-reduction-based SPIDER gradient estimator into it. We prove convergence of our algorithm to a first-order stationary solution. The proposed approach achieves the best known communication complexity $O(\epsilon^{-1})$ along with the optimal computation complexity. For finite-sum problems (finite $n$), we achieve the optimal computation (IFO) complexity $O(\sqrt{Nn}\,\epsilon^{-1})$. For online problems ($n$ unknown or infinite), we achieve the optimal IFO complexity $O(\epsilon^{-3/2})$. In both cases, we maintain the linear speedup achieved by existing methods. This is a massive improvement over the $O(\epsilon^{-2})$ IFO complexity of existing approaches. Additionally, our algorithm is general enough to allow non-identical distributions of data across workers, as in the recently proposed federated learning paradigm.
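
The sketch below illustrates the two ingredients the abstract combines: a SPIDER-style recursive variance-reduced gradient estimator run locally on each worker, and periodic model averaging at the server (the "restart"). It is a minimal illustration on a synthetic least-squares problem, not the authors' implementation; the epoch length `q`, batch size, step size, and all function names are assumptions made for the example.

```python
# Minimal sketch (assumed setup): parallel restarted local updates with a
# SPIDER-style recursive gradient estimator on a synthetic least-squares loss.
import numpy as np

rng = np.random.default_rng(0)
N, n, d = 4, 1000, 20                      # workers, samples per worker, dimension
A = [rng.standard_normal((n, d)) for _ in range(N)]
b = [rng.standard_normal(n) for _ in range(N)]

def grad(i, x, idx):
    """Minibatch gradient of worker i's local least-squares cost."""
    Ai, bi = A[i][idx], b[i][idx]
    return Ai.T @ (Ai @ x - bi) / len(idx)

def pr_spider(rounds=20, q=10, batch=32, lr=0.05):
    x = np.zeros(d)                        # server model
    for _ in range(rounds):                # one round between two "restarts"
        # Each worker anchors its estimator with a large-batch gradient ...
        v = [grad(i, x, np.arange(n)) for i in range(N)]
        xs = [x.copy() for _ in range(N)]
        for _ in range(q):                 # ... then runs q local SPIDER steps
            for i in range(N):
                x_new = xs[i] - lr * v[i]
                idx = rng.choice(n, batch, replace=False)
                # SPIDER recursion: v <- grad(x_new) - grad(x_old) + v
                v[i] = grad(i, x_new, idx) - grad(i, xs[i], idx) + v[i]
                xs[i] = x_new
        x = np.mean(xs, axis=0)            # server averages the local models
    return x

x_hat = pr_spider()
full_grad = np.mean([grad(i, x_hat, np.arange(n)) for i in range(N)], axis=0)
print("norm of average gradient after training:", np.linalg.norm(full_grad))
```

Communication happens only at the averaging step, once every `q` local iterations, which is what drives the communication complexity down relative to synchronizing after every stochastic step.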