Epidemic Learning: Boosting Decentralized Learning with Randomized Communication

3 October 2023
M. Vos
Sadegh Farhadkhani
R. Guerraoui
Anne-Marie Kermarrec
Rafael Pires
Rishi Sharma
arXiv:2310.01972
Abstract

We present Epidemic Learning (EL), a simple yet powerful decentralized learning (DL) algorithm that leverages changing communication topologies to achieve faster model convergence compared to conventional DL approaches. At each round of EL, each node sends its model updates to a random sample of $s$ other nodes (in a system of $n$ nodes). We provide an extensive theoretical analysis of EL, demonstrating that its changing topology culminates in superior convergence properties compared to the state-of-the-art (static and dynamic) topologies. Considering smooth non-convex loss functions, the number of transient iterations for EL, i.e., the rounds required to achieve asymptotic linear speedup, is $O(n^3/s^2)$, which outperforms the best-known bound $O(n^3)$ by a factor of $s^2$, indicating the benefit of randomized communication for DL. We empirically evaluate EL in a 96-node network and compare its performance with state-of-the-art DL approaches. Our results illustrate that EL converges up to $1.7\times$ quicker than baseline DL algorithms and attains $2.2\%$ higher accuracy for the same communication volume.
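To make the round structure concrete, below is a minimal, illustrative Python sketch of one EL round as described in the abstract: each node performs a local update and then pushes its model to a random sample of $s$ other nodes. The function name, the single-SGD-step local update, and the plain averaging of received models are simplifying assumptions for illustration, not the paper's exact pseudocode.

```python
import random
import numpy as np

def epidemic_learning_round(models, gradients, lr=0.1, s=3):
    """One illustrative round of randomized push-based communication.

    models    -- list of n model parameter vectors (NumPy arrays)
    gradients -- list of n local gradients, one per node
    s         -- number of random peers each node sends its model to

    The averaging rule (each node averages its own model with whatever
    it received) is an assumption made for this sketch.
    """
    n = len(models)

    # Local step: each node applies one SGD update to its own model.
    updated = [m - lr * g for m, g in zip(models, gradients)]

    # Communication: each node sends its updated model to s peers
    # sampled uniformly at random (a fresh topology every round).
    inbox = [[m] for m in updated]  # every node keeps its own model
    for i in range(n):
        peers = random.sample([j for j in range(n) if j != i], s)
        for j in peers:
            inbox[j].append(updated[i])

    # Aggregation: each node averages the models it received this round.
    return [np.mean(received, axis=0) for received in inbox]
```

Because the sample of peers is redrawn every round, the effective communication graph changes over time, which is the property the analysis credits for the improved $O(n^3/s^2)$ transient-iteration bound.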
