
Order Optimal Bounds for One-Shot Federated Learning over Non-Convex Loss Functions

Abstract

We consider the problem of federated learning in a one-shot setting in which there are $m$ machines, each observing $n$ sample functions from an unknown distribution on non-convex loss functions. Let $F:[-1,1]^d\to\mathbb{R}$ be the expected loss function with respect to this unknown distribution. The goal is to find an estimate of the minimizer of $F$. Based on its observations, each machine generates a signal of bounded length $B$ and sends it to a server. The server collects the signals of all machines and outputs an estimate of the minimizer of $F$. We show that the expected loss of any algorithm is lower bounded by $\max\big(1/(\sqrt{n}(mB)^{1/d}),\, 1/\sqrt{mn}\big)$, up to a logarithmic factor. We then prove that this lower bound is order optimal in $m$ and $n$ by presenting a distributed learning algorithm, called Multi-Resolution Estimator for Non-Convex loss functions (MRE-NC), whose expected loss matches the lower bound for large $mn$, up to polylogarithmic factors.
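To make the one-shot communication pattern concrete, the following is a minimal illustrative sketch in Python of the setting described above: each of $m$ machines compresses its $n$ local samples into a signal of at most $B$ bits and sends it once, and the server aggregates the $m$ signals into an estimate. The quantization scheme and the names `local_signal` and `server_estimate` are hypothetical placeholders for exposition; this is not the MRE-NC algorithm.

```python
import numpy as np

# Illustrative one-shot protocol (toy scheme, NOT MRE-NC):
# each machine sends a single B-bit signal; the server aggregates them.

def local_signal(samples: np.ndarray, B: int):
    """Compress n local sample points in [-1,1]^d into at most B bits
    by quantizing their empirical mean coordinate-wise (toy compressor)."""
    d = samples.shape[1]
    bits_per_coord = max(1, B // d)
    levels = 2 ** bits_per_coord
    mean = samples.mean(axis=0)  # crude proxy for a local minimizer
    # map each coordinate from [-1, 1] onto `levels` quantization levels
    q = np.round((mean + 1.0) / 2.0 * (levels - 1)).astype(int)
    return q, bits_per_coord

def server_estimate(signals) -> np.ndarray:
    """Collect the m signals, de-quantize each back into [-1,1]^d,
    and output their average as the estimate of the minimizer."""
    points = []
    for q, bits in signals:
        levels = 2 ** bits
        points.append(q / (levels - 1) * 2.0 - 1.0)
    return np.mean(points, axis=0)

# Toy usage: m machines, n samples each, dimension d, budget B bits per machine.
m, n, d, B = 8, 100, 2, 32
rng = np.random.default_rng(0)
signals = [local_signal(rng.uniform(-1, 1, size=(n, d)), B) for _ in range(m)]
print(server_estimate(signals))
```

The sketch only mirrors the communication structure (one round, $B$-bit messages, server-side aggregation); the paper's MRE-NC algorithm uses a multi-resolution encoding of local loss information to attain the stated bound.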
