Order Optimal Bounds for One-Shot Federated Learning over non-Convex Loss Functions

We consider the problem of federated learning in a one-shot setting, in which there are m machines, each observing n sample functions from an unknown distribution over non-convex loss functions. Let F be the expected loss function with respect to this unknown distribution. The goal is to find an estimate of a minimizer of F. Based on its observations, each machine generates a signal of bounded length and sends it to a server. The server collects the signals of all machines and outputs an estimate of the minimizer of F. We prove a lower bound, holding up to a logarithmic factor, on the expected loss of any such algorithm in terms of m and n. We then show that this lower bound is order optimal in m and n by presenting a distributed learning algorithm, called Multi-Resolution Estimator for Non-Convex loss functions (MRE-NC), whose expected loss matches the lower bound for large mn, up to polylogarithmic factors.
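To make the one-shot communication pattern concrete, here is a minimal toy sketch: m machines each observe n sampled loss functions, send a single short message to a server, and the server outputs an estimate of the minimizer of F. This is a naive baseline, not the paper's MRE-NC algorithm; the quadratic sample losses, the grid, and all parameter values are illustrative assumptions.

```python
import random

# Toy one-shot setup: each machine sends exactly ONE short message
# (a grid index, hence bounded length), and the server aggregates.
# NOT the MRE-NC algorithm; a naive baseline for illustration only.

random.seed(1)

M, N = 20, 50                         # machines, samples per machine
THETA_STAR = 0.3                      # minimizer of the expected loss F
GRID = [i / 100 for i in range(101)]  # 101 points -> a 7-bit message

def machine_message(n_samples):
    """Grid index of the local empirical minimizer (the machine's 'signal')."""
    eps = [random.uniform(-0.2, 0.2) for _ in range(n_samples)]
    # Empirical loss: average of (x - (theta* + eps_i))^2 over the samples.
    def emp_loss(x):
        return sum((x - (THETA_STAR + e)) ** 2 for e in eps) / n_samples
    return min(range(len(GRID)), key=lambda i: emp_loss(GRID[i]))

def server_estimate(messages):
    """Server decodes each index and averages the local minimizers."""
    return sum(GRID[i] for i in messages) / len(messages)

messages = [machine_message(N) for _ in range(M)]
estimate = server_estimate(messages)
print(round(estimate, 3))  # close to THETA_STAR
```

Averaging the m one-shot messages reduces the estimation error relative to any single machine, which is the intuition the paper's lower and upper bounds make precise.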