DoCoM-SGT: Doubly Compressed Momentum-assisted Stochastic Gradient
Tracking Algorithm for Communication Efficient Decentralized Learning
Abstract
This paper proposes the Doubly Compressed Momentum-assisted Stochastic Gradient Tracking algorithm (DoCoM-SGT) for communication-efficient decentralized learning. DoCoM-SGT utilizes two compression steps per communication round as the algorithm simultaneously tracks the averaged iterate and stochastic gradient. Furthermore, DoCoM-SGT incorporates a momentum-based technique for reducing variances in the gradient estimates. We show that DoCoM-SGT finds a solution $\bar{\theta}$ in $T$ iterations satisfying $\mathbb{E}[\|\nabla f(\bar{\theta})\|^2] = \mathcal{O}(1/T^{2/3})$ for non-convex objective functions, and we provide competitive convergence rate guarantees for other function classes. Numerical experiments on synthetic and real datasets validate the efficacy of our algorithm.
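To make the update structure concrete, below is a minimal single-machine NumPy simulation of the ingredients the abstract names: compressed gossip on both the iterates and the gradient tracker (the two compression steps per round), plus a STORM-style momentum gradient estimator. The network, quadratic objective, top-k compressor, step sizes, and all variable names are illustrative assumptions for this sketch, not the paper's exact recursion or notation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 3, 10, 3                       # nodes, dimension, entries kept by top-k

# Doubly stochastic mixing matrix for a fully connected 3-node network.
W = np.full((n, n), 0.25) + 0.25 * np.eye(n)

A = rng.standard_normal((n, d))          # node i minimizes f_i(x) = ||x - A[i]||^2 / 2

def grad_at(x, noise):
    """Stochastic gradient of the local quadratics, with a shared noise sample."""
    return x - A + 0.1 * noise

def top_k(X):
    """Row-wise top-k magnitude compressor (one common contractive choice)."""
    out = np.zeros_like(X)
    for i, row in enumerate(X):
        idx = np.argsort(np.abs(row))[-k:]
        out[i, idx] = row[idx]
    return out

eta, gamma, beta = 0.05, 0.1, 0.2        # step size, gossip weight, momentum
theta = rng.standard_normal((n, d))      # local iterates
theta_hat = np.zeros((n, d))             # communication surrogate for theta
v = grad_at(theta, rng.standard_normal((n, d)))  # momentum gradient estimator
g = v.copy()                             # gradient tracker, initialized at v
g_hat = np.zeros((n, d))                 # communication surrogate for g

for t in range(500):
    theta_prev = theta.copy()
    # Compression step 1: compressed gossip on the iterates.
    theta_hat += top_k(theta - theta_hat)
    theta = theta + gamma * (W @ theta_hat - theta_hat) - eta * g
    # STORM-style momentum: the same sample evaluated at new and old iterates.
    noise = rng.standard_normal((n, d))
    v_new = grad_at(theta, noise) + (1 - beta) * (v - grad_at(theta_prev, noise))
    # Compression step 2: compressed gossip on the gradient tracker, which
    # preserves the tracking invariant mean(g) = mean(v).
    g_hat += top_k(g - g_hat)
    g = g + gamma * (W @ g_hat - g_hat) + (v_new - v)
    v = v_new

# The iterates should end near consensus at a stationary point (here A.mean(0)).
print("consensus error :", np.linalg.norm(theta - theta.mean(0)))
print("optimality error:", np.linalg.norm(theta.mean(0) - A.mean(0)))
```

Only the compressed differences `theta - theta_hat` and `g - g_hat` would be transmitted in a true distributed run, which is what makes each communication round cheap; the surrogates `theta_hat` and `g_hat` are the states both sides of a link can reconstruct from those compressed messages.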
