DC-S3GD: Delay-Compensated Stale-Synchronous SGD for Large-Scale
Decentralized Neural Network Training
Workshop on Deep Learning on Supercomputers (DLS), 2019
Abstract
Data parallelism has become the de facto standard for training Deep Neural Networks on multiple processing units. In this work we propose DC-S3GD, a decentralized (without Parameter Server) stale-synchronous version of the Delay-Compensated Asynchronous Stochastic Gradient Descent (DC-ASGD) algorithm. In our approach, we allow computation and communication to overlap, and compensate the error inherent in applying stale gradients with a first-order correction of the gradients. We demonstrate the effectiveness of our approach by training Convolutional Neural Networks with large batches and achieving state-of-the-art results.
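The first-order correction referenced in the abstract follows the delay-compensation idea of DC-ASGD (Zheng et al.), in which a gradient computed at stale parameters is adjusted with a diagonal, element-wise approximation of the Hessian. The sketch below is a minimal NumPy illustration of that DC-ASGD-style correction, not the paper's exact decentralized variant; the function name, the variance-control parameter lam, and its value are illustrative assumptions.

```python
import numpy as np

def delay_compensated_gradient(stale_grad, w_current, w_stale, lam=0.04):
    """Illustrative DC-ASGD-style first-order correction.

    The gradient was computed at w_stale, but will be applied at w_current.
    The missing curvature information is approximated element-wise by
    lam * g * g, yielding the corrected gradient
    g + lam * g ⊙ g ⊙ (w_current - w_stale).
    """
    correction = lam * stale_grad * stale_grad * (w_current - w_stale)
    return stale_grad + correction

# Toy usage: apply a gradient that arrived one update late.
w_stale = np.array([0.50, -1.20, 0.30])    # parameters at which the gradient was computed
w_current = np.array([0.45, -1.15, 0.28])  # parameters moved while the gradient was in flight
g = np.array([0.10, -0.20, 0.05])          # stale gradient

g_corrected = delay_compensated_gradient(g, w_current, w_stale)
w_next = w_current - 0.1 * g_corrected     # SGD step using the compensated gradient
```

In the stale-synchronous setting described by the paper, such a correction lets each worker overlap the exchange of gradients with the next forward/backward pass, paying for the staleness with a cheap element-wise adjustment rather than a full resynchronization.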
