Asynchronous decentralized accelerated stochastic gradient descent

IEEE Journal on Selected Areas in Information Theory (JSAIT), 2021
Abstract

In this work, we introduce an asynchronous decentralized accelerated stochastic gradient descent method for decentralized stochastic optimization, in settings where communication and synchronization are the major bottlenecks. We establish $\mathcal{O}(1/\epsilon)$ (resp., $\mathcal{O}(1/\sqrt{\epsilon})$) communication complexity and $\mathcal{O}(1/\epsilon^2)$ (resp., $\mathcal{O}(1/\epsilon)$) sampling complexity for solving general convex (resp., strongly convex) problems.
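To give a concrete feel for the problem setting, below is a minimal sketch of plain synchronous decentralized SGD with gossip averaging on a ring network. It is not the authors' accelerated asynchronous method; the ring topology, mixing weights, step sizes, and quadratic local objectives are illustrative assumptions chosen so the example is self-contained and runnable.

```python
import numpy as np

# Minimal sketch: generic decentralized SGD with gossip averaging on a ring.
# NOT the paper's accelerated asynchronous scheme; the network, objective,
# and step sizes below are illustrative assumptions only.

rng = np.random.default_rng(0)
n_agents, dim, n_iters = 8, 5, 2000
targets = rng.normal(size=(n_agents, dim))  # agent i holds f_i(x) = 0.5*||x - targets[i]||^2
x = rng.normal(size=(n_agents, dim))        # local iterates, one row per agent

# Doubly stochastic mixing matrix for a ring: each agent averages with its two neighbors.
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i - 1) % n_agents] = 0.25
    W[i, (i + 1) % n_agents] = 0.25

for t in range(n_iters):
    noise = 0.1 * rng.normal(size=x.shape)   # models stochastic gradient noise
    grads = (x - targets) + noise            # noisy gradients of the local quadratics
    x = W @ x - (1.0 / (t + 10)) * grads     # gossip-average, then take a local SGD step

# The minimizer of sum_i f_i is the mean of the targets; all agents should approach it.
consensus = x.mean(axis=0)
print("distance to optimum:", np.linalg.norm(consensus - targets.mean(axis=0)))
```

In this baseline, every agent must exchange iterates with its neighbors at every iteration, which is exactly the communication and synchronization cost the paper's asynchronous accelerated method is designed to reduce.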
