SQuARM-SGD: Communication-Efficient Momentum SGD for Decentralized Optimization
Navjot Singh, Deepesh Data, Jemin George, Suhas Diggavi
arXiv:2005.07041, 13 May 2020
Papers citing "SQuARM-SGD: Communication-Efficient Momentum SGD for Decentralized Optimization" (8 of 8 papers shown):
Communication Optimization for Decentralized Learning atop Bandwidth-limited Edge Networks. Tingyang Sun, Tuan Nguyen, Ting He. 16 Apr 2025.
Convergence and Privacy of Decentralized Nonconvex Optimization with Gradient Clipping and Communication Compression. Boyue Li, Yuejie Chi. 17 May 2023.
CEDAS: A Compressed Decentralized Stochastic Gradient Method with Improved Convergence. Kun-Yen Huang, Shin-Yi Pu. 14 Jan 2023.
BEER: Fast O(1/T) Rate for Decentralized Nonconvex Optimization with Communication Compression. Haoyu Zhao, Boyue Li, Zhize Li, Peter Richtárik, Yuejie Chi. 31 Jan 2022.
Sample and Communication-Efficient Decentralized Actor-Critic Algorithms with Finite-Time Analysis. Ziyi Chen, Yi Zhou, Rongrong Chen, Shaofeng Zou. 08 Sep 2021.
QuPeD: Quantized Personalization via Distillation with Applications to Federated Learning. Kaan Ozkara, Navjot Singh, Deepesh Data, Suhas Diggavi. [FedML, MQ] 29 Jul 2021.
A Field Guide to Federated Optimization. Jianyu Wang, Zachary B. Charles, Zheng Xu, Gauri Joshi, H. B. McMahan, ..., Mi Zhang, Tong Zhang, Chunxiang Zheng, Chen Zhu, Wennan Zhu. [FedML] 14 Jul 2021.
Federated Learning with Compression: Unified Analysis and Sharp Guarantees. Farzin Haddadpour, Mohammad Mahdi Kamani, Aryan Mokhtari, M. Mahdavi. [FedML] 02 Jul 2020.