A Sharp Estimate on the Transient Time of Distributed Stochastic Gradient Descent

6 June 2019
Shi Pu
Alexander Olshevsky
I. Paschalidis
Abstract

This paper is concerned with minimizing the average of $n$ cost functions over a network in which agents may communicate and exchange information with each other. We consider the setting where only noisy gradient information is available. To solve the problem, we study the distributed stochastic gradient descent (DSGD) method and perform a non-asymptotic convergence analysis. For strongly convex and smooth objective functions, DSGD asymptotically achieves the optimal network-independent convergence rate compared to centralized stochastic gradient descent (SGD). Our main contribution is to characterize the transient time needed for DSGD to approach the asymptotic convergence rate, which we show behaves as $K_T = \mathcal{O}\left(\frac{n}{(1-\rho_w)^2}\right)$, where $1-\rho_w$ denotes the spectral gap of the mixing matrix. Moreover, we construct a "hard" optimization problem for which we show the transient time needed for DSGD to approach the asymptotic convergence rate is lower bounded by $\Omega\left(\frac{n}{(1-\rho_w)^2}\right)$, implying the sharpness of the obtained result. Numerical experiments demonstrate the tightness of the theoretical results.
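As a rough illustration of the DSGD setup the abstract describes (each agent mixes its iterate with neighbors through a doubly stochastic matrix $W$ and then takes a local noisy-gradient step), here is a minimal sketch. It is not the authors' code; the scalar quadratic costs, the ring topology, the mixing weights, and the $\mathcal{O}(1/k)$ step size are all illustrative assumptions.

```python
# Minimal illustrative sketch of distributed stochastic gradient descent (DSGD).
# Assumed problem instance (not from the paper): agent i holds a local quadratic
# cost f_i(x) = 0.5 * (x - b_i)^2 and only observes noisy gradients. Agents
# average with neighbors via a doubly stochastic mixing matrix W, then take a
# local stochastic gradient step with a decaying step size.
import numpy as np

def ring_mixing_matrix(n, self_weight=0.5):
    """Doubly stochastic mixing matrix for a ring graph (assumed topology, n >= 3)."""
    W = np.eye(n) * self_weight
    off = (1.0 - self_weight) / 2.0
    for i in range(n):
        W[i, (i - 1) % n] = off
        W[i, (i + 1) % n] = off
    return W

def dsgd(n=20, steps=5000, noise_std=1.0, seed=0):
    rng = np.random.default_rng(seed)
    b = rng.normal(size=n)              # local minimizers; global optimum is mean(b)
    W = ring_mixing_matrix(n)
    x = np.zeros(n)                     # one scalar iterate per agent
    for k in range(1, steps + 1):
        alpha = 1.0 / (k + 10)          # O(1/k) step size, standard for strongly convex SGD
        noisy_grad = (x - b) + noise_std * rng.normal(size=n)
        x = W @ x - alpha * noisy_grad  # consensus (mixing) step + local gradient step
    x_star = b.mean()
    return np.mean((x - x_star) ** 2)   # average squared error across agents

if __name__ == "__main__":
    print(f"avg squared error after DSGD: {dsgd():.4e}")
```

In this picture, the spectral gap $1-\rho_w$ measures how quickly the mixing step drives the agents toward consensus; the paper's transient-time bound $K_T = \mathcal{O}\left(\frac{n}{(1-\rho_w)^2}\right)$ quantifies how many iterations pass before DSGD's error matches the network-independent rate of centralized SGD.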
