A Bias-Correction Decentralized Stochastic Gradient Algorithm with Momentum Acceleration

31 January 2025
Yuchen Hu, Xi Chen, Weidong Liu, Xiaojun Mao
ArXiv (abs) · PDF · HTML
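For context, the cited paper and most of the works listed below study decentralized stochastic gradient methods with momentum. The sketch below shows the generic decentralized momentum-SGD template they build on: each node mixes its iterate with its neighbors' (gossip averaging under a doubly stochastic matrix W) and then takes a local heavy-ball step. This is an illustrative baseline only, not the paper's bias-corrected algorithm; the function name, hyperparameters, and toy problem are all assumptions.

# Minimal sketch of plain decentralized SGD with local heavy-ball momentum.
# NOT the paper's bias-corrected method; names/parameters are illustrative.
import numpy as np

def decentralized_momentum_sgd(grad, x0, W, lr=0.05, beta=0.9, steps=100):
    """grad(i, x) returns node i's stochastic gradient at x;
    W is a doubly stochastic mixing matrix over the network."""
    n = W.shape[0]
    x = np.tile(x0, (n, 1))      # one local iterate per node
    v = np.zeros_like(x)         # per-node momentum buffers
    for _ in range(steps):
        g = np.stack([grad(i, x[i]) for i in range(n)])
        v = beta * v + g         # local heavy-ball momentum update
        x = W @ x - lr * v       # gossip-average neighbors, then descend
    return x.mean(axis=0)        # consensus estimate

# Toy usage: 4 nodes, node i holds f_i(x) = ||x - c_i||^2 / 2, so the
# global optimum is the mean of the centers c_i.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    centers = rng.normal(size=(4, 3))
    W = np.full((4, 4), 0.25)    # fully connected uniform averaging
    grad = lambda i, x: (x - centers[i]) + 0.01 * rng.normal(size=3)
    x_star = decentralized_momentum_sgd(grad, np.zeros(3), W)
    print(np.allclose(x_star, centers.mean(axis=0), atol=0.1))  # True

On heterogeneous (non-IID) local data this plain template suffers a bias, which is exactly the gap that gradient-tracking, bias-correction, and momentum-tracking variants in the list below target.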

Papers citing "A Bias-Correction Decentralized Stochastic Gradient Algorithm with Momentum Acceleration"

19 citing papers shown:
An Accelerated Distributed Stochastic Gradient Method with Momentum
Kun-Yen Huang, Shi Pu, Angelia Nedić
15 Feb 2024

Decentralized Federated Learning: Fundamentals, State of the Art, Frameworks, Trends, and Challenges. IEEE Communications Surveys and Tutorials (COMST), 2022
Enrique Tomás Martínez Beltrán, Mario Quiles Pérez, Pedro Miguel Sánchez Sánchez, Sergio López Bernal, Gérome Bovet, M. Pérez, Gregorio Martínez Pérez, Alberto Huertas Celdrán
Tags: FedML
15 Nov 2022

Momentum Tracking: Momentum Acceleration for Decentralized Deep Learning on Heterogeneous Data
Yuki Takezawa, Hang Bao, Kenta Niwa, Ryoma Sato, Makoto Yamada
30 Sep 2022

A Unified and Refined Convergence Analysis for Non-Convex Decentralized Learning
Sulaiman A. Alghunaim, Kun Yuan
19 Oct 2021

RelaySum for Decentralized Deep Learning on Heterogeneous Data. Neural Information Processing Systems (NeurIPS), 2021
Thijs Vogels, Lie He, Anastasia Koloskova, Tao Lin, Sai Praneeth Karimireddy, Sebastian U. Stich, Martin Jaggi
Tags: FedML, MoE
08 Oct 2021

DecentLaM: Decentralized Momentum SGD for Large-batch Deep Training. IEEE International Conference on Computer Vision (ICCV), 2021
Kun Yuan, Yiming Chen, Xinmeng Huang, Yingya Zhang, Pan Pan, Yinghui Xu, W. Yin
Tags: MoE
24 Apr 2021

Quasi-Global Momentum: Accelerating Decentralized Deep Learning on Heterogeneous Data. International Conference on Machine Learning (ICML), 2021
Tao Lin, Sai Praneeth Karimireddy, Sebastian U. Stich, Martin Jaggi
Tags: FedML
09 Feb 2021

A general framework for decentralized optimization with first-order methods. Proceedings of the IEEE (Proc. IEEE), 2020
Ran Xin, Shi Pu, Angelia Nedić, U. Khan
12 Sep 2020

Periodic Stochastic Gradient Descent with Momentum for Decentralized Training
Hongchang Gao, Heng-Chiao Huang
24 Aug 2020

A Unified Theory of Decentralized SGD with Changing Topology and Local Updates. International Conference on Machine Learning (ICML), 2020
Anastasia Koloskova, Nicolas Loizou, Sadra Boreiri, Martin Jaggi, Sebastian U. Stich
Tags: FedML
23 Mar 2020

The Non-IID Data Quagmire of Decentralized Machine Learning. International Conference on Machine Learning (ICML), 2019
Kevin Hsieh, Amar Phanishayee, O. Mutlu, Phillip B. Gibbons
01 Oct 2019

Bayesian Nonparametric Federated Learning of Neural Networks. International Conference on Machine Learning (ICML), 2019
Mikhail Yurochkin, Mayank Agarwal, S. Ghosh, Kristjan Greenewald, T. Hoang, Y. Khazaeni
Tags: FedML
28 May 2019

On the Linear Speedup Analysis of Communication Efficient Momentum SGD for Distributed Non-Convex Optimization. International Conference on Machine Learning (ICML), 2019
Hao Yu, Rong Jin, Sen Yang
Tags: FedML
09 May 2019

On the Influence of Bias-Correction on Distributed Stochastic Optimization
Kun Yuan, Sulaiman A. Alghunaim, Bicheng Ying, Ali H. Sayed
26 Mar 2019

Distributed Stochastic Gradient Tracking Methods
Shi Pu, A. Nedić
25 May 2018

D$^2$: Decentralized Training over Decentralized Data
Hanlin Tang, Xiangru Lian, Ming Yan, Ce Zhang, Ji Liu
19 Mar 2018

Network Topology and Communication-Computation Tradeoffs in Decentralized Optimization
A. Nedić, Alexander Olshevsky, Michael G. Rabbat
26 Sep 2017

Collaborative Deep Learning in Fixed Topology Networks
Zhanhong Jiang, Aditya Balu, Chinmay Hegde, Soumik Sarkar
Tags: FedML
23 Jun 2017

Can Decentralized Algorithms Outperform Centralized Algorithms? A Case Study for Decentralized Parallel Stochastic Gradient Descent
Xiangru Lian, Ce Zhang, Huan Zhang, Cho-Jui Hsieh, Wei Zhang, Ji Liu
25 May 2017