Fundamental Limits of Approximate Gradient Coding

It has been established that when the gradient coding problem is distributed among $n$ servers, the computation load (number of stored data partitions) of each worker is at least $s+1$ in order to resist any $s$ stragglers. This scheme incurs a large overhead when the number of stragglers $s$ is large. In this paper, we focus on a new framework called \emph{approximate gradient coding} to mitigate stragglers in distributed learning. We show that, to exactly recover the gradient with high probability, the computation load is lower bounded by $O(\log(n)/\log(n/s))$. We also propose a code that exactly matches this lower bound. We identify a fundamental three-fold tradeoff for any approximate gradient coding scheme, $d \geq O(\log(1/\epsilon)/\log(n/s))$, where $d$ is the computation load and $\epsilon$ is the error of the gradient. We give an explicit code construction based on a random edge removal process that achieves the derived tradeoff. We implement our schemes and demonstrate the advantage of these approaches over the current fastest gradient coding strategies.
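The interplay between the computation load $d$, the number of stragglers $s$, and the gradient error $\epsilon$ can be illustrated with a small simulation. The sketch below is an assumption-laden illustration, not the paper's actual construction or decoder: the helper names `random_assignment` and `approximate_gradient`, the uniform random choice of partitions, and the uncoded (coverage-only) recovery are all hypothetical stand-ins. It only shows how storing $d$ partitions per worker and summing whatever the non-stragglers cover yields an approximate full gradient whose error shrinks as $d$ grows.

```python
import numpy as np

def random_assignment(n_workers, n_parts, d, seed=0):
    """Give each worker d data partitions chosen uniformly at random.
    Hypothetical stand-in for the paper's random edge removal process."""
    rng = np.random.default_rng(seed)
    return [rng.choice(n_parts, size=d, replace=False) for _ in range(n_workers)]

def approximate_gradient(partition_grads, assignment, alive):
    """Sum the partial gradients covered by the non-straggler workers.

    partition_grads: array of shape (n_parts, dim), true per-partition gradients
    assignment: list of partition-index arrays, one per worker
    alive: indices of workers that responded (non-stragglers)
    Returns the recovered gradient and the fraction of partitions missed.
    """
    n_parts = partition_grads.shape[0]
    covered = np.zeros(n_parts, dtype=bool)
    for w in alive:
        covered[assignment[w]] = True
    recovered = partition_grads[covered].sum(axis=0)
    return recovered, 1.0 - covered.mean()

# Toy usage: n workers, n partitions, s random stragglers.
n, s, d, dim = 20, 10, 3, 5
rng = np.random.default_rng(1)
grads = rng.normal(size=(n, dim))            # one gradient per data partition
assign = random_assignment(n, n, d, seed=2)
alive = rng.choice(n, size=n - s, replace=False)
est, miss_frac = approximate_gradient(grads, assign, alive)
err = np.linalg.norm(est - grads.sum(axis=0)) / np.linalg.norm(grads.sum(axis=0))
print(f"missed {miss_frac:.0%} of partitions, relative gradient error {err:.3f}")
```

In this toy setting the recovered gradient simply drops the partitions that no surviving worker holds, which is one source of the error $\epsilon$ in the tradeoff; increasing $d$ raises the coverage probability at the cost of more computation per worker, mirroring the $d$ versus $\epsilon$ versus $s$ tension described above.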