Fundamental Limits of Approximate Gradient Coding

Abstract

It has been established that when the gradient coding problem is distributed among $n$ servers, the computation load (number of stored data partitions) of each worker must be at least $s+1$ in order to resist $s$ stragglers. This scheme incurs a large overhead when the number of stragglers $s$ is large. In this paper, we focus on a new framework called \emph{approximate gradient coding} to mitigate stragglers in distributed learning. We show that, to exactly recover the gradient with high probability, the computation load is lower bounded by $O(\log(n)/\log(n/s))$. We also propose a code that exactly matches this lower bound. We identify a fundamental three-fold tradeoff, $d \geq O(\log(1/\epsilon)/\log(n/s))$, for any approximate gradient coding scheme, where $d$ is the computation load and $\epsilon$ is the error of the computed gradient. We give an explicit code construction, based on a random edge removal process, that achieves the derived tradeoff. We implement our schemes and demonstrate the advantage of these approaches over the current fastest gradient coding strategies.
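
The scaling behind the bound $d \geq O(\log(n)/\log(n/s))$ can be illustrated with a small simulation. The sketch below is not the paper's construction: it assumes, purely for illustration, $n$ workers and $n$ data partitions, assigns each worker $d$ partitions chosen uniformly at random, removes $s$ random stragglers, and estimates both the probability that every partition survives (exact recovery) and the average fraction of lost partitions (a proxy for the error $\epsilon$). The function name `simulate` and all parameter values are hypothetical.

import random

def simulate(n=1000, s=100, d=3, trials=200, seed=0):
    # Illustrative random assignment, not the paper's code construction.
    rng = random.Random(seed)
    exact, lost_frac = 0, 0.0
    for _ in range(trials):
        # Each worker i stores d partition indices chosen uniformly at random.
        store = [rng.sample(range(n), d) for _ in range(n)]
        # s workers straggle and return nothing.
        stragglers = set(rng.sample(range(n), s))
        covered = set()
        for w, parts in enumerate(store):
            if w not in stragglers:
                covered.update(parts)
        missing = n - len(covered)
        exact += (missing == 0)
        lost_frac += missing / n
    return exact / trials, lost_frac / trials

if __name__ == "__main__":
    p_exact, eps = simulate()
    print(f"P[exact recovery] ~= {p_exact:.2f}, average error eps ~= {eps:.4f}")

For $n = 1000$ and $s = 100$, note that $\log(n)/\log(n/s) = 3$, so a computation load of $d = 3$ is the regime suggested by the lower bound, in contrast to the load of $s + 1 = 101$ required by exact gradient coding.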
