A Non-Asymptotic Analysis of Network Independence for Distributed Stochastic Gradient Descent

Abstract

This paper is concerned with minimizing the average of $n$ cost functions over a network, in which agents may communicate and exchange information with their peers. Specifically, we consider the setting where only noisy gradient information is available. To solve the problem, we study the standard distributed stochastic gradient descent (DSGD) method and perform a non-asymptotic convergence analysis. For strongly convex and smooth objective functions, we not only show that DSGD asymptotically achieves the optimal network-independent convergence rate of centralized stochastic gradient descent (SGD), but also explicitly identify the non-asymptotic convergence rate as a function of characteristics of the objective functions and the network. Furthermore, we derive the time needed for DSGD to approach the asymptotic convergence rate, which behaves as $K_T=\mathcal{O}\!\left(\frac{n^{16/15}}{(1-\rho_w)^{31/15}}\right)$, where $(1-\rho_w)$ denotes the spectral gap of the mixing matrix of communicating agents.
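To illustrate the DSGD iteration the abstract refers to, here is a minimal sketch: each agent averages its neighbors' iterates through a mixing matrix, then takes a step along its own noisy local gradient. The setup below (scalar quadratic local costs, a lazy Metropolis ring, the step-size schedule, and all constants) is illustrative and not taken from the paper.

```python
import numpy as np

# Hypothetical setup: n agents on a ring minimize the average of
# local quadratics f_i(x) = 0.5 * (x - b_i)^2, so the global
# minimizer of (1/n) * sum_i f_i is mean(b).
rng = np.random.default_rng(0)
n = 10
b = rng.normal(size=n)          # local targets; global optimum is b.mean()

# Doubly stochastic mixing matrix W for a ring (lazy Metropolis weights);
# its spectral gap (1 - rho_w) governs how fast disagreement decays.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

x = np.zeros(n)                 # one scalar iterate per agent
for k in range(2000):
    alpha = 1.0 / (k + 20)      # diminishing step size (illustrative choice)
    grad = x - b                # exact gradient of each local quadratic
    noise = 0.01 * rng.normal(size=n)   # additive gradient noise
    x = W @ x - alpha * (grad + noise)  # mix with neighbors, then descend

print(abs(x.mean() - b.mean()))  # average iterate nears the global minimizer
print(x.std())                   # agents reach approximate consensus
```

With the diminishing step size, both the optimization error and the consensus error shrink over time; with a constant step size, agents would instead converge only to a neighborhood of the minimizer whose size scales with the step size and the inverse spectral gap.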
