
One-Shot Federated Learning: Theoretical Limits and Algorithms to Achieve Them

Journal of Machine Learning Research (JMLR), 2019
Abstract

We consider distributed statistical optimization in a one-shot setting, where there are $m$ machines, each observing $n$ i.i.d. samples. Based on its observed samples, each machine sends a $B$-bit-long message to a server. The server then collects the messages from all machines and estimates a parameter that minimizes an expected convex loss function. We investigate the impact of the communication constraint $B$ on the expected error and derive a tight lower bound on the error achievable by any algorithm. We then propose an estimator, which we call the Multi-Resolution Estimator (MRE), whose expected error (when $B \ge \log mn$) meets the aforementioned lower bound up to poly-logarithmic factors and is thereby order optimal. We also address the problem of learning under a tiny communication budget, and present lower and upper error bounds for the case where $B$ is a constant. Unlike that of existing algorithms, the expected error of MRE tends to zero as the number of machines $m$ goes to infinity, even when the number of samples per machine $n$ remains bounded by a constant. This property makes the MRE algorithm applicable in new machine learning paradigms where $m$ is much larger than $n$.
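To make the one-shot, communication-constrained setting concrete, here is a minimal toy sketch (not the paper's MRE algorithm): $m$ machines each hold $n$ i.i.d. samples, each sends a single $B$-bit message, and the server aggregates. In this illustration each machine quantizes its local sample mean to $B$ bits over an assumed known range $[-R, R]$ and the server averages the decoded bin centers; the distribution, range, and quantization scheme are all assumptions for the example.

```python
import numpy as np

def machine_message(samples, B, R=4.0):
    """One-shot B-bit message: quantize the local sample mean
    into one of 2**B uniform bins over [-R, R] (an assumed range)."""
    local_mean = np.clip(samples.mean(), -R, R)
    levels = 2 ** B
    idx = int((local_mean + R) / (2 * R) * levels)
    return min(idx, levels - 1)  # integer in [0, 2**B - 1]

def server_estimate(messages, B, R=4.0):
    """Server side: decode each B-bit index to its bin center
    and average across the m machines."""
    levels = 2 ** B
    width = 2 * R / levels
    centers = [-R + (i + 0.5) * width for i in messages]
    return float(np.mean(centers))

rng = np.random.default_rng(0)
theta, m, n, B = 1.3, 1000, 5, 8  # unknown mean, machines, samples, bits
msgs = [machine_message(rng.normal(theta, 1.0, n), B) for _ in range(m)]
est = server_estimate(msgs, B)
# The error shrinks as m grows even though n stays fixed, mirroring the
# m -> infinity behavior the abstract highlights for MRE.
print(abs(est - theta))
```

This simple mean-quantization scheme only illustrates the communication model; it does not attain the optimal error trade-offs the paper establishes.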
