An Operator Splitting View of Federated Learning

Over the past few years, the federated learning (FL) community has witnessed a proliferation of new algorithms. However, our understanding of the theory of FL is still fragmented, and a thorough, formal comparison of these algorithms remains elusive. Motivated by this gap, we show that many of the existing FL algorithms can be understood from an operator splitting point of view. This unification allows us to compare different algorithms with ease, to refine previous convergence results, and to uncover new algorithmic variants. In particular, our analysis reveals the vital role played by the step size in FL algorithms. The unification also leads to a streamlined and economical way to accelerate FL algorithms, without incurring any communication overhead. We perform numerical experiments on both convex and nonconvex models to validate our findings.
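To make the operator-splitting reading of FL concrete, below is a minimal, hypothetical sketch (not the paper's formulation). It casts a FedProx-style local update as an approximate proximal step on each client's objective, with the server averaging the resulting points; the function names (`local_prox_step`, `server_round`), the toy quadratic clients, and the step size `eta` are all illustrative assumptions.

```python
import numpy as np

def local_prox_step(grad_fi, w_server, eta, n_inner=50, lr=0.1):
    """Approximate prox_{eta * f_i}(w_server), i.e. minimize
    f_i(x) + ||x - w_server||^2 / (2 * eta) by inner gradient descent."""
    x = w_server.copy()
    for _ in range(n_inner):
        x -= lr * (grad_fi(x) + (x - w_server) / eta)
    return x

def server_round(client_grads, w, eta):
    """One communication round: each client runs a proximal step from the
    current server model; the server averages the local solutions."""
    local_models = [local_prox_step(g, w, eta) for g in client_grads]
    return np.mean(local_models, axis=0)

# Toy quadratic clients f_i(x) = 0.5 * ||x - c_i||^2, so grad f_i(x) = x - c_i.
rng = np.random.default_rng(0)
centers = [rng.normal(size=5) for _ in range(4)]
grads = [lambda x, c=c: x - c for c in centers]

w = np.zeros(5)
for _ in range(20):
    w = server_round(grads, w, eta=1.0)

print(w)                        # server model after 20 rounds
print(np.mean(centers, axis=0)) # for these clients, the global minimizer
```

In this toy setting the averaged proximal steps converge to the mean of the client minimizers; the abstract's point is that viewing such updates as compositions of proximal and averaging operators is what allows different FL algorithms, and the effect of the step size, to be compared within one framework.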
@article{malekmohammadi2025_2108.05974,
  title   = {An Operator Splitting View of Federated Learning},
  author  = {Saber Malekmohammadi and Kiarash Shaloudegi and Zeou Hu and Yaoliang Yu},
  journal = {arXiv preprint arXiv:2108.05974},
  year    = {2025}
}