ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.

DaSGD: Squeezing SGD Parallelization Performance in Distributed Training Using Delayed Averaging

arXiv:2006.00441 · 31 May 2020
Q. Zhou
Yawen Zhang
Pengcheng Li
Xiaoyong Liu
Jun Yang
Runsheng Wang
Ru Huang
    FedML

Papers citing "DaSGD: Squeezing SGD Parallelization Performance in Distributed Training Using Delayed Averaging"

2 / 2 papers shown
  1. Aggregated Residual Transformations for Deep Neural Networks
     Saining Xie, Ross B. Girshick, Piotr Dollár, Z. Tu, Kaiming He
     16 Nov 2016
  2. Optimal Distributed Online Prediction using Mini-Batches
     O. Dekel, Ran Gilad-Bachrach, Ohad Shamir, Lin Xiao
     07 Dec 2010