A Bias-Correction Decentralized Stochastic Gradient Algorithm with Momentum Acceleration
arXiv:2501.19082, 31 January 2025
Yuchen Hu, Xi Chen, Weidong Liu, Xiaojun Mao
Papers citing "A Bias-Correction Decentralized Stochastic Gradient Algorithm with Momentum Acceleration" (19 of 19 papers shown)
An Accelerated Distributed Stochastic Gradient Method with Momentum
Kun-Yen Huang, Shi Pu, Angelia Nedić. 15 Feb 2024.

Decentralized Federated Learning: Fundamentals, State of the Art, Frameworks, Trends, and Challenges
IEEE Communications Surveys and Tutorials (COMST), 2022
Enrique Tomás Martínez Beltrán, Mario Quiles Pérez, Pedro Miguel Sánchez Sánchez, Sergio López Bernal, Gérome Bovet, M. Pérez, Gregorio Martínez Pérez, Alberto Huertas Celdrán. 15 Nov 2022. [FedML]

Momentum Tracking: Momentum Acceleration for Decentralized Deep Learning on Heterogeneous Data
Yuki Takezawa, Hang Bao, Kenta Niwa, Ryoma Sato, Makoto Yamada. 30 Sep 2022.

A Unified and Refined Convergence Analysis for Non-Convex Decentralized Learning
Sulaiman A. Alghunaim, Kun Yuan. 19 Oct 2021.

RelaySum for Decentralized Deep Learning on Heterogeneous Data
Neural Information Processing Systems (NeurIPS), 2021
Thijs Vogels, Lie He, Anastasia Koloskova, Tao Lin, Sai Praneeth Karimireddy, Sebastian U. Stich, Martin Jaggi. 08 Oct 2021. [FedML, MoE]

DecentLaM: Decentralized Momentum SGD for Large-batch Deep Training
IEEE International Conference on Computer Vision (ICCV), 2021
Kun Yuan, Yiming Chen, Xinmeng Huang, Yingya Zhang, Pan Pan, Yinghui Xu, W. Yin. 24 Apr 2021. [MoE]

Quasi-Global Momentum: Accelerating Decentralized Deep Learning on Heterogeneous Data
International Conference on Machine Learning (ICML), 2021
Tao Lin, Sai Praneeth Karimireddy, Sebastian U. Stich, Martin Jaggi. 09 Feb 2021. [FedML]

A general framework for decentralized optimization with first-order methods
Proceedings of the IEEE (Proc. IEEE), 2020
Ran Xin, Shi Pu, Angelia Nedić, U. Khan. 12 Sep 2020.

Periodic Stochastic Gradient Descent with Momentum for Decentralized Training
Hongchang Gao, Heng-Chiao Huang. 24 Aug 2020.

A Unified Theory of Decentralized SGD with Changing Topology and Local Updates
International Conference on Machine Learning (ICML), 2020
Anastasia Koloskova, Nicolas Loizou, Sadra Boreiri, Martin Jaggi, Sebastian U. Stich. 23 Mar 2020. [FedML]

The Non-IID Data Quagmire of Decentralized Machine Learning
International Conference on Machine Learning (ICML), 2019
Kevin Hsieh, Amar Phanishayee, O. Mutlu, Phillip B. Gibbons. 01 Oct 2019.

Bayesian Nonparametric Federated Learning of Neural Networks
International Conference on Machine Learning (ICML), 2019
Mikhail Yurochkin, Mayank Agarwal, S. Ghosh, Kristjan Greenewald, T. Hoang, Y. Khazaeni. 28 May 2019. [FedML]

On the Linear Speedup Analysis of Communication Efficient Momentum SGD for Distributed Non-Convex Optimization
International Conference on Machine Learning (ICML), 2019
Hao Yu, Rong Jin, Sen Yang. 09 May 2019. [FedML]

On the Influence of Bias-Correction on Distributed Stochastic Optimization
Kun Yuan, Sulaiman A. Alghunaim, Bicheng Ying, Ali H. Sayed. 26 Mar 2019.

Distributed Stochastic Gradient Tracking Methods
Shi Pu, A. Nedić. 25 May 2018.

D²: Decentralized Training over Decentralized Data
Hanlin Tang, Xiangru Lian, Ming Yan, Ce Zhang, Ji Liu. 19 Mar 2018.

Network Topology and Communication-Computation Tradeoffs in Decentralized Optimization
A. Nedić, Alexander Olshevsky, Michael G. Rabbat. 26 Sep 2017.

Collaborative Deep Learning in Fixed Topology Networks
Zhanhong Jiang, Aditya Balu, Chinmay Hegde, Soumik Sarkar. 23 Jun 2017. [FedML]

Can Decentralized Algorithms Outperform Centralized Algorithms? A Case Study for Decentralized Parallel Stochastic Gradient Descent
Xiangru Lian, Ce Zhang, Huan Zhang, Cho-Jui Hsieh, Wei Zhang, Ji Liu. 25 May 2017.