More Communication Does Not Result in Smaller Generalization Error in Federated Learning

24 April 2023
Romain Chor
Milad Sefidgaran
A. Zaidi
Communities: FedML, AI4CE
Abstract

We study the generalization error of statistical learning models in a Federated Learning (FL) setting. Specifically, there are $K$ devices or clients, each holding its own independent dataset of size $n$. Individual models, learned locally via Stochastic Gradient Descent, are aggregated (averaged) by a central server into a global model and then sent back to the devices. We consider multiple (say $R \in \mathbb{N}^*$) rounds of model aggregation and study the effect of $R$ on the generalization error of the final aggregated model. We establish an upper bound on the generalization error that accounts explicitly for the effect of $R$, in addition to the number of participating devices $K$ and the dataset size $n$. For fixed $(n, K)$, the bound increases with $R$, suggesting that the generalization of such learning algorithms is negatively affected by more frequent communication with the parameter server. Since the empirical risk, however, generally decreases for larger values of $R$, this indicates that $R$ may be a parameter worth optimizing to reduce the population risk of FL algorithms. The results of this paper, which extend straightforwardly to the heterogeneous data setting, are also illustrated through numerical examples.
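To make the setup concrete, below is a minimal sketch (not the paper's code) of the multi-round aggregation scheme the abstract describes: $K$ clients, each holding its own dataset of $n$ samples, run local SGD starting from the current global model, and a central server averages the $K$ local models over $R$ rounds. The synthetic linear-regression data, squared loss, and all hyperparameters are illustrative assumptions.

```python
import numpy as np

# Sketch of R rounds of FL aggregation: K clients, n samples each,
# local SGD on a squared loss, server-side model averaging.
# All values below are illustrative placeholders.

rng = np.random.default_rng(0)

K, n, d = 10, 100, 5          # clients, samples per client, feature dimension
R = 4                          # number of aggregation rounds
local_steps, lr = 20, 0.05     # local SGD steps per round and step size

# Each client holds an independent synthetic dataset (X_k, y_k).
w_true = rng.normal(size=d)
data = []
for _ in range(K):
    X = rng.normal(size=(n, d))
    y = X @ w_true + 0.1 * rng.normal(size=n)
    data.append((X, y))

w_global = np.zeros(d)         # model broadcast by the parameter server

for r in range(R):
    local_models = []
    for X, y in data:
        w = w_global.copy()    # each client starts from the global model
        for _ in range(local_steps):
            i = rng.integers(n)
            grad = (X[i] @ w - y[i]) * X[i]   # SGD step on the squared loss
            w -= lr * grad
        local_models.append(w)
    # The server aggregates (averages) the K local models into the new global model.
    w_global = np.mean(local_models, axis=0)

# Empirical risk of the final aggregated model on the pooled training data.
emp_risk = np.mean([np.mean((X @ w_global - y) ** 2) for X, y in data])
print(f"R = {R}, empirical risk = {emp_risk:.4f}")
```

In this picture, increasing $R$ typically drives the empirical risk down, while the paper's bound says the generalization error grows with $R$ for fixed $(n, K)$; since the population risk is the sum of the two, it may be minimized at an intermediate value of $R$.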

View on arXiv: 2304.12216