
A Quadratic Synchronization Rule for Distributed Deep Learning

Abstract

In distributed deep learning with data parallelism, synchronizing gradients at every training step can cause a huge communication overhead, especially when many nodes work together to train large models. Local gradient methods, such as Local SGD, address this issue by allowing workers to compute locally for $H$ steps without synchronizing with others, hence reducing communication frequency. While $H$ has been viewed as a hyperparameter to trade optimization efficiency for communication cost, recent research indicates that setting a proper $H$ value can lead to generalization improvement. Yet, selecting a proper $H$ is elusive. This work proposes a theory-grounded method for determining $H$, named the Quadratic Synchronization Rule (QSR), which recommends dynamically setting $H$ in proportion to $\frac{1}{\eta^2}$ as the learning rate $\eta$ decays over time. Extensive ImageNet experiments on ResNet and ViT show that local gradient methods with QSR consistently improve the test accuracy over other synchronization strategies. Compared with standard data parallel training, QSR enables Local AdamW on ViT-B to cut the training time on 16 or 64 GPUs down from 26.7 to 20.2 hours or from 8.6 to 5.5 hours and, at the same time, achieves $1.16\%$ or $0.84\%$ higher top-1 validation accuracy.
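The following is a minimal sketch of how the proportionality $H \propto \frac{1}{\eta^2}$ described above could be turned into a schedule. The function name `qsr_sync_interval` and the hyperparameters `alpha` (the proportionality coefficient) and `h_base` (a floor on the synchronization interval) are illustrative assumptions, not values or an API prescribed by the paper.

```python
import math

def qsr_sync_interval(lr: float, alpha: float = 0.01, h_base: int = 2) -> int:
    """Sketch of a quadratic synchronization rule: pick the number of local
    steps H between gradient synchronizations in proportion to 1 / lr**2,
    never dropping below a small base interval.

    `alpha` and `h_base` are hypothetical hyperparameters for illustration.
    """
    return max(h_base, math.ceil(alpha / lr ** 2))

# As a decaying schedule lowers the learning rate, H grows quadratically,
# so workers communicate less often late in training.
for lr in (0.1, 0.03, 0.01, 0.003):
    print(f"lr={lr:<6} -> H={qsr_sync_interval(lr)}")
```

Under this sketch, communication becomes sparser exactly when the learning rate is small, which is where the abstract reports the generalization benefit of larger $H$.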
