
On the Convergence of a Federated Expectation-Maximization Algorithm

Main: 9 pages, 8 figures; Bibliography: 5 pages; Appendix: 21 pages
Abstract

Data heterogeneity has been a long-standing bottleneck in studying the convergence rates of Federated Learning algorithms. To better understand the issue of data heterogeneity, we study the convergence rate of the Expectation-Maximization (EM) algorithm for the Federated Mixture of K Linear Regressions model (FMLR). We completely characterize the convergence rate of the EM algorithm under all regimes of m/n, where m is the number of clients and n is the number of data points per client. We show that with a signal-to-noise ratio (SNR) of order Ω(√K), the well-initialized EM algorithm converges to within the minimax distance of the ground truth under all regimes. Interestingly, we identify that when the number of clients grows reasonably with respect to the number of data points per client, the EM algorithm requires only a constant number of iterations to converge. We perform experiments on synthetic data to illustrate our results. In line with our theoretical findings, the simulations show that, rather than being a bottleneck, data heterogeneity can accelerate the convergence of iterative federated algorithms.
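To make the object of study concrete, the following is a minimal sketch of the classical (centralized, single-dataset) EM iteration for a mixture of K linear regressions, which the paper analyzes in its federated form. The function name `em_mlr`, the fixed noise scale `sigma`, and the `init_betas` argument (mirroring the paper's "well-initialized" setting) are illustrative choices, not the paper's actual implementation.

```python
import numpy as np

def em_mlr(X, y, K, n_iters=50, sigma=1.0, init_betas=None, seed=0):
    """EM for a mixture of K linear regressions (centralized toy sketch).

    Illustrative only: the paper studies a federated variant (FMLR) run
    across m clients; here we show the per-dataset E- and M-steps.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    betas = rng.normal(size=(K, d)) if init_betas is None else init_betas.copy()
    pis = np.full(K, 1.0 / K)  # mixing weights

    for _ in range(n_iters):
        # E-step: responsibilities r[i, k] ∝ pi_k * N(y_i | x_i·beta_k, sigma^2)
        resid = y[:, None] - X @ betas.T          # (n, K) residuals
        logp = np.log(pis) - 0.5 * (resid / sigma) ** 2
        logp -= logp.max(axis=1, keepdims=True)   # stabilize the softmax
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)

        # M-step: weighted least squares for each component
        for k in range(K):
            w = r[:, k]
            A = X.T @ (w[:, None] * X) + 1e-8 * np.eye(d)  # tiny ridge for stability
            betas[k] = np.linalg.solve(A, X.T @ (w * y))
        pis = r.mean(axis=0)

    return betas, pis
```

With an initialization near the ground truth and a high SNR (well-separated components), a handful of such iterations already drives the iterates close to the true regression vectors, which is the regime the paper's constant-iteration result concerns.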
