ResearchTrend.AI
Communication-Efficient Local Decentralized SGD Methods
arXiv:1910.09126, 21 October 2019
Xiang Li, Wenhao Yang, Shusen Wang, Zhihua Zhang

Papers citing "Communication-Efficient Local Decentralized SGD Methods" (11 papers)
Decentralized Stochastic Gradient Descent Ascent for Finite-Sum Minimax Problems
Hongchang Gao
06 Dec 2022

FedCut: A Spectral Analysis Framework for Reliable Detection of Byzantine Colluders
Hanlin Gu, Lixin Fan, Xingxing Tang, Qiang Yang
AAML, FedML
24 Nov 2022

NET-FLEET: Achieving Linear Convergence Speedup for Fully Decentralized Federated Learning with Heterogeneous Data
Xin Zhang, Minghong Fang, Zhuqing Liu, Haibo Yang, Jia-Wei Liu, Zhengyuan Zhu
FedML
17 Aug 2022

FedSSO: A Federated Server-Side Second-Order Optimization Algorithm
Xinteng Ma, Renyi Bao, Jinpeng Jiang, Yang Liu, Arthur Jiang, Junhua Yan, Xin Liu, Zhisong Pan
FedML
20 Jun 2022

Federated Learning with Buffered Asynchronous Aggregation
John Nguyen, Kshitiz Malik, Hongyuan Zhan, Ashkan Yousefpour, Michael G. Rabbat, Mani Malek, Dzmitry Huba
FedML
11 Jun 2021

Moshpit SGD: Communication-Efficient Decentralized Training on Heterogeneous Unreliable Devices
Max Ryabinin, Eduard A. Gorbunov, Vsevolod Plokhotnyuk, Gennady Pekhimenko
04 Mar 2021

Local Stochastic Gradient Descent Ascent: Convergence Analysis and Communication Efficiency
Yuyang Deng, M. Mahdavi
25 Feb 2021

MARINA: Faster Non-Convex Distributed Learning with Compression
Eduard A. Gorbunov, Konstantin Burlachenko, Zhize Li, Peter Richtárik
15 Feb 2021

Federated Mutual Learning
T. Shen, Jie M. Zhang, Xinkang Jia, Fengda Zhang, Gang Huang, Pan Zhou, Kun Kuang, Fei Wu, Chao-Xiang Wu
FedML
27 Jun 2020

A Unified Theory of Decentralized SGD with Changing Topology and Local Updates
Anastasia Koloskova, Nicolas Loizou, Sadra Boreiri, Martin Jaggi, Sebastian U. Stich
FedML
23 Mar 2020

On the Convergence of Local Descent Methods in Federated Learning
Farzin Haddadpour, M. Mahdavi
FedML
31 Oct 2019