
Asynchronous Stochastic Optimization Robust to Arbitrary Delays

Abstract

We consider stochastic optimization with delayed gradients where, at each time step $t$, the algorithm makes an update using a stale stochastic gradient from step $t - d_t$ for some arbitrary delay $d_t$. This setting abstracts asynchronous distributed optimization, where a central server receives gradient updates computed by worker machines. These machines can experience computation and communication loads that vary significantly over time. In the general non-convex smooth optimization setting, we give a simple and efficient algorithm that requires $O(\sigma^2/\epsilon^4 + \tau/\epsilon^2)$ steps to find an $\epsilon$-stationary point $x$, where $\tau$ is the \emph{average} delay $\frac{1}{T}\sum_{t=1}^T d_t$ and $\sigma^2$ is the variance of the stochastic gradients. This improves over previous work, which showed that stochastic gradient descent achieves the same rate but with respect to the \emph{maximal} delay $\max_t d_t$, which can be significantly larger than the average delay, especially in heterogeneous distributed systems. Our experiments demonstrate the efficacy and robustness of our algorithm in cases where the delay distribution is skewed or heavy-tailed.
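To make the setting concrete, the sketch below simulates the delayed-gradient update $x_{t+1} = x_t - \eta\, g(x_{t-d_t})$ described above. The toy quadratic objective, noise level, step size, and geometric (skewed) delay distribution are illustrative assumptions; this is not the paper's algorithm or experimental setup, only the plain delayed-SGD abstraction it builds on.

```python
import numpy as np

# Minimal simulation of SGD with arbitrary gradient delays: at step t the
# server applies a stochastic gradient that was computed at the stale
# iterate x_{t - d_t}. Objective, noise, and delays are illustrative only.

rng = np.random.default_rng(0)

def stochastic_grad(x, sigma=1.0):
    # Gradient of the toy objective f(x) = 0.5 * ||x||^2, plus Gaussian noise.
    return x + sigma * rng.normal(size=x.shape)

T, eta, dim = 1_000, 0.05, 10            # horizon, step size, dimension
x = rng.normal(size=dim)
history = [x.copy()]                      # past iterates, so stale points can be read back

delays = rng.geometric(p=0.1, size=T) - 1  # skewed delays d_t >= 0 (assumed distribution)

for t in range(T):
    stale_index = max(0, t - delays[t])    # index t - d_t, clipped at the start
    g = stochastic_grad(history[stale_index])
    x = x - eta * g                        # delayed-SGD update
    history.append(x.copy())

print("average delay tau:", delays.mean())
print("final ||x|| (should be small):", np.linalg.norm(x))
```

Under a heavy-tailed delay distribution like the one sampled here, the average delay $\tau$ can stay small even when $\max_t d_t$ is very large, which is exactly the regime where an average-delay guarantee is stronger than a maximal-delay one.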
