Asynchronous Stochastic Gradient Descent with Decoupled Backpropagation and Layer-Wise Updates

8 October 2024
Cabrel Teguemne Fokam
Khaleelulla Khan Nazeer
Lukas König
David Kappel
Anand Subramoney
Abstract

The increasing size of deep learning models has made distributed training across multiple devices essential. However, current methods such as distributed data-parallel training suffer from large communication and synchronization overheads when training across devices, leading to longer training times as a result of suboptimal hardware utilization. Asynchronous stochastic gradient descent (ASGD) methods can improve training speed, but are sensitive to delays caused by both communication and differences in throughput. Moreover, the backpropagation algorithm used within ASGD workers is bottlenecked by the interlocking between its forward and backward passes. Current methods also do not take advantage of the large difference in computation required for the forward and backward passes. Therefore, we propose an extension to ASGD called Partial Decoupled ASGD (PD-ASGD) that addresses these issues. PD-ASGD uses separate threads for the forward and backward passes, decoupling the updates and allowing for a higher ratio of forward to backward threads than the usual 1:1 ratio, leading to higher throughput. PD-ASGD also performs layer-wise (partial) model updates concurrently across multiple threads. This reduces parameter staleness and consequently improves robustness to delays. Our approach yields close to state-of-the-art results while running up to 5.95× faster than synchronous data parallelism in the presence of delays, and up to 2.14× faster than comparable ASGD algorithms by achieving higher model FLOPs utilization. We mathematically describe the gradient bias introduced by our method, establish an upper bound, and prove convergence.
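The threading scheme described in the abstract can be illustrated with a minimal sketch (not the authors' implementation). The sketch below, in plain NumPy, assumes a toy two-layer MLP on a synthetic regression task: two forward-only threads read a (possibly stale) snapshot of the shared weights and enqueue their activations, while a single backward thread drains the queue, computes gradients layer by layer, and applies each layer's update as soon as it is ready. All names, sizes, and hyperparameters are illustrative assumptions.

import threading
import queue
import numpy as np

rng = np.random.default_rng(0)

# Shared parameters of a tiny 2-layer MLP; the backward thread updates them in place.
W1 = rng.standard_normal((32, 16)) * 0.1
W2 = rng.standard_normal((16, 1)) * 0.1
lock = threading.Lock()          # guards per-layer parameter reads/writes
acts = queue.Queue(maxsize=8)    # forward threads -> backward thread
LR, STEPS, N_FWD = 1e-3, 200, 2  # illustrative hyperparameters

def forward_worker():
    # Forward-only thread: reads a (possibly stale) weight snapshot and enqueues activations.
    for _ in range(STEPS):
        x = rng.standard_normal((64, 32))
        y = x.sum(axis=1, keepdims=True)   # toy regression target
        with lock:
            w1, w2 = W1.copy(), W2.copy()
        h = np.maximum(x @ w1, 0.0)        # ReLU hidden layer
        pred = h @ w2
        acts.put((x, h, pred, y))

def backward_worker(n_items):
    # Backward thread: gradients computed layer by layer, each layer updated as soon
    # as its gradient is ready (layer-wise / partial update).
    global W1, W2
    for _ in range(n_items):
        x, h, pred, y = acts.get()
        grad_out = 2.0 * (pred - y) / len(x)   # dLoss/dpred for mean-squared error
        gW2 = h.T @ grad_out
        with lock:
            w2_snapshot = W2.copy()
            W2 -= LR * gW2                     # output layer updated immediately
        grad_h = (grad_out @ w2_snapshot.T) * (h > 0.0)
        gW1 = x.T @ grad_h
        with lock:
            W1 -= LR * gW1                     # hidden layer updated next

# A forward:backward thread ratio above 1:1, as in the description above.
threads = [threading.Thread(target=forward_worker) for _ in range(N_FWD)]
threads.append(threading.Thread(target=backward_worker, args=(N_FWD * STEPS,)))
for t in threads:
    t.start()
for t in threads:
    t.join()
print("trained parameter norms:", np.linalg.norm(W1), np.linalg.norm(W2))

Because the backward thread may update a layer between a forward pass and the corresponding gradient computation, the gradients in this sketch are slightly biased, which is the effect the paper bounds mathematically.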

@article{fokam2025_2410.05985,
  title={Asynchronous Stochastic Gradient Descent with Decoupled Backpropagation and Layer-Wise Updates},
  author={Cabrel Teguemne Fokam and Khaleelulla Khan Nazeer and Lukas König and David Kappel and Anand Subramoney},
  journal={arXiv preprint arXiv:2410.05985},
  year={2025}
}