arXiv:1907.02189

On the Convergence of FedAvg on Non-IID Data

4 July 2019
Xiang Li
Kaixuan Huang
Wenhao Yang
Shusen Wang
Zhihua Zhang
Abstract

Federated learning enables a large number of edge computing devices to jointly learn a model without data sharing. As a leading algorithm in this setting, Federated Averaging (\texttt{FedAvg}) runs Stochastic Gradient Descent (SGD) in parallel on a small subset of the total devices and averages the resulting model sequences only once in a while. Despite its simplicity, it lacks theoretical guarantees under realistic settings. In this paper, we analyze the convergence of \texttt{FedAvg} on non-iid data and establish a convergence rate of $\mathcal{O}(\frac{1}{T})$ for strongly convex and smooth problems, where $T$ is the number of SGD iterations. Importantly, our bound demonstrates a trade-off between communication efficiency and convergence rate. As user devices may be disconnected from the server, we relax the assumption of full device participation to partial device participation and study different averaging schemes; a low device participation rate can be achieved without severely slowing down the learning. Our results indicate that heterogeneity of data slows down the convergence, which matches empirical observations. Furthermore, we provide a necessary condition for \texttt{FedAvg} on non-iid data: the learning rate $\eta$ must decay, even if full gradients are used; otherwise, the solution will be $\Omega(\eta)$ away from the optimal.
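The sketch below illustrates the training loop the abstract describes: sampled devices run a few local SGD steps from the current global model, and the server averages the returned models, with a decaying learning rate as the abstract's necessary condition requires. It is a minimal illustration, not the paper's reference implementation; the quadratic local objectives, sampling scheme, and hyperparameters are placeholder assumptions.

```python
# Minimal FedAvg sketch: partial device participation and a decaying
# learning rate (eta_t ~ 1/t). Local objectives are illustrative
# quadratics, not the paper's experimental setup.
import numpy as np

rng = np.random.default_rng(0)

num_devices = 100        # N total devices
devices_per_round = 10   # K sampled devices (partial participation)
local_steps = 5          # E local SGD steps between communications
rounds = 200             # number of communication rounds
dim = 20

# Non-iid toy data: device k holds f_k(w) = 0.5 * ||w - c_k||^2
# with its own centre c_k, so local optima differ across devices.
centres = rng.normal(size=(num_devices, dim)) * (1 + np.arange(num_devices))[:, None] / num_devices

def local_sgd(w, centre, lr, steps):
    """Run `steps` noisy SGD steps on one device's local objective."""
    w = w.copy()
    for _ in range(steps):
        grad = (w - centre) + 0.1 * rng.normal(size=dim)  # stochastic gradient
        w -= lr * grad
    return w

w_global = np.zeros(dim)
for t in range(rounds):
    lr = 1.0 / (t + 10)  # decaying learning rate eta_t
    sampled = rng.choice(num_devices, size=devices_per_round, replace=False)
    # Each sampled device starts from the current global model and runs local
    # SGD; the paper analyzes several averaging schemes, uniform averaging of
    # the sampled devices is used here for simplicity.
    local_models = [local_sgd(w_global, centres[k], lr, local_steps) for k in sampled]
    w_global = np.mean(local_models, axis=0)

# The global optimum of the averaged objective is the mean of the centres.
print("distance to optimum:", np.linalg.norm(w_global - centres.mean(axis=0)))
```

Replacing the decaying `lr` with a constant one in this toy loop leaves the iterates hovering at a distance proportional to the learning rate from the optimum, which is the behaviour the abstract's $\Omega(\eta)$ lower bound formalizes.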
