arXiv:1712.02679
AdaComp : Adaptive Residual Gradient Compression for Data-Parallel Distributed Training
7 December 2017
Chia-Yu Chen
Jungwook Choi
D. Brand
A. Agrawal
Wei Zhang
K. Gopalakrishnan
ODL
Papers citing
"AdaComp : Adaptive Residual Gradient Compression for Data-Parallel Distributed Training"
50 / 65 papers shown
FedSparQ: Adaptive Sparse Quantization with Error Feedback for Robust & Efficient Federated Learning
Chaimaa Medjadji
Sadi Alawadi
Feras M. Awaysheh
Guilain Leduc
Sylvain Kubler
Yves Le Traon
FedML
MQ
249
0
0
05 Nov 2025
Novel Gradient Sparsification Algorithm via Bayesian Inference
International Workshop on Machine Learning for Signal Processing (MLSP), 2024
Ali Bereyhi
B. Liang
G. Boudreau
Ali Afana
216
5
0
23 Sep 2024
I/O in Machine Learning Applications on HPC Systems: A 360-degree Survey
Noah Lewis
J. L. Bez
Suren Byna
487
4
0
16 Apr 2024
Smart-Infinity: Fast Large Language Model Training using Near-Storage Processing on a Real System
International Symposium on High-Performance Computer Architecture (HPCA), 2024
Hongsun Jang
Jaeyong Song
Jaewon Jung
Jaeyoung Park
Youngsok Kim
Jinho Lee
162
28
0
11 Mar 2024
Preserving Near-Optimal Gradient Sparsification Cost for Scalable Distributed Deep Learning
Daegun Yoon
Sangyoon Oh
137
2
0
21 Feb 2024
Communication-Efficient Distributed Learning with Local Immediate Error Compensation
Yifei Cheng
Li Shen
Linli Xu
Xun Qian
Shiwei Wu
Yiming Zhou
Tie Zhang
Dacheng Tao
Enhong Chen
224
1
0
19 Feb 2024
Temporal Knowledge Distillation for Time-Sensitive Financial Services Applications
Hongda Shen
Eren Kurshan
AAML
184
3
0
28 Dec 2023
FedSZ: Leveraging Error-Bounded Lossy Compression for Federated Learning Communications
Grant Wilkins
Sheng Di
Jon C. Calhoun
Zilinghan Li
Kibaek Kim
Robert Underwood
Richard Mortier
Franck Cappello
FedML
256
9
0
20 Dec 2023
Near-Linear Scaling Data Parallel Training with Overlapping-Aware Gradient Compression
Lin Meng
Yuzhong Sun
Weimin Li
224
4
0
08 Nov 2023
MiCRO: Near-Zero Cost Gradient Sparsification for Scaling and Accelerating Distributed DNN Training
International Conference on High Performance Computing (HiPC), 2023
Daegun Yoon
Sangyoon Oh
209
2
0
02 Oct 2023
DEFT: Exploiting Gradient Norm Difference between Model Layers for Scalable Gradient Sparsification
International Conference on Parallel Processing (ICPP), 2023
Daegun Yoon
Sangyoon Oh
256
3
0
07 Jul 2023
Optimus-CC: Efficient Large NLP Model Training with 3D Parallelism Aware Communication Compression
International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), 2023
Jaeyong Song
Jinkyu Yim
Jaewon Jung
Hongsun Jang
H. Kim
Youngsok Kim
Jinho Lee
GNN
271
39
0
24 Jan 2023
L-GreCo: Layerwise-Adaptive Gradient Compression for Efficient and Accurate Deep Learning
Mohammadreza Alimohammadi
I. Markov
Elias Frantar
Dan Alistarh
219
4
0
31 Oct 2022
Approximate Computing and the Efficient Machine Learning Expedition
J. Henkel
Hai Helen Li
A. Raghunathan
M. Tahoori
Swagath Venkataramani
Xiaoxuan Yang
Georgios Zervakis
197
23
0
02 Oct 2022
Empirical Analysis on Top-k Gradient Sparsification for Distributed Deep Learning in a Supercomputing Environment
Daegun Yoon
Sangyoon Oh
191
0
0
18 Sep 2022
Reconciling Security and Communication Efficiency in Federated Learning
IEEE Data Engineering Bulletin (DEB), 2022
Karthik Prasad
Sayan Ghosh
Graham Cormode
Ilya Mironov
Ashkan Yousefpour
Pierre Stock
FedML
174
11
0
26 Jul 2022
sqSGD: Locally Private and Communication Efficient Federated Learning
Yan Feng
Tao Xiong
Ruofan Wu
Lingjuan Lyu
Leilei Shi
FedML
175
2
0
21 Jun 2022
Rate-Distortion Theoretic Bounds on Generalization Error for Distributed Learning
Neural Information Processing Systems (NeurIPS), 2022
Romain Chor
Abdellatif Zaidi
Milad Sefidgaran
FedML
283
18
0
06 Jun 2022
ByteComp: Revisiting Gradient Compression in Distributed Training
Zhuang Wang
Yanghua Peng
Yibo Zhu
T. Ng
209
2
0
28 May 2022
Efficient Direct-Connect Topologies for Collective Communications
Symposium on Networked Systems Design and Implementation (NSDI), 2022
Liangyu Zhao
Siddharth Pal
Tapan Chugh
Weiyang Wang
Jason Fantl
P. Basu
J. Khoury
Arvind Krishnamurthy
385
14
0
07 Feb 2022
TopoOpt: Co-optimizing Network Topology and Parallelization Strategy for Distributed Training Jobs
Symposium on Networked Systems Design and Implementation (NSDI), 2022
Weiyang Wang
Moein Khazraee
Zhizhen Zhong
M. Ghobadi
Zhihao Jia
Dheevatsa Mudigere
Ying Zhang
A. Kewitsch
452
143
0
01 Feb 2022
ColBERTv2: Effective and Efficient Retrieval via Lightweight Late Interaction
Keshav Santhanam
Omar Khattab
Jon Saad-Falcon
Christopher Potts
Matei A. Zaharia
480
572
0
02 Dec 2021
Doing More by Doing Less: How Structured Partial Backpropagation Improves Deep Learning Clusters
Adarsh Kumar
Kausik Subramanian
Shivaram Venkataraman
Aditya Akella
123
6
0
20 Nov 2021
CGX: Adaptive System Support for Communication-Efficient Deep Learning
I. Markov
Hamidreza Ramezanikebrya
Dan Alistarh
GNN
330
5
0
16 Nov 2021
Resource-Efficient Federated Learning
European Conference on Computer Systems (EuroSys), 2021
A. Abdelmoniem
Atal Narayan Sahu
Marco Canini
Suhaib A. Fahmy
FedML
250
69
0
01 Nov 2021
Large-Scale Deep Learning Optimizations: A Comprehensive Survey
Xiaoxin He
Fuzhao Xue
Xiaozhe Ren
Yang You
325
18
0
01 Nov 2021
Revealing and Protecting Labels in Distributed Training
Neural Information Processing Systems (NeurIPS), 2021
Trung D. Q. Dang
Om Thakkar
Swaroop Indra Ramaswamy
Rajiv Mathews
Peter Chin
Françoise Beaufays
103
29
0
31 Oct 2021
A Distributed SGD Algorithm with Global Sketching for Deep Learning Training Acceleration
Lingfei Dai
Boyu Diao
Chao Li
Yongjun Xu
207
5
0
13 Aug 2021
CD-SGD: Distributed Stochastic Gradient Descent with Compression and Delay Compensation
International Conference on Parallel Processing (ICPP), 2021
Enda Yu
Dezun Dong
Yemao Xu
Shuo Ouyang
Xiangke Liao
149
6
0
21 Jun 2021
ScaleCom: Scalable Sparsified Gradient Compression for Communication-Efficient Distributed Training
Neural Information Processing Systems (NeurIPS), 2021
Chia-Yu Chen
Jiamin Ni
Songtao Lu
Xiaodong Cui
Pin-Yu Chen
...
Naigang Wang
Swagath Venkataramani
Vijayalakshmi Srinivasan
Wei Zhang
K. Gopalakrishnan
165
73
0
21 Apr 2021
MergeComp: A Compression Scheduler for Scalable Communication-Efficient Distributed Training
Zhuang Wang
X. Wu
T. Ng
GNN
109
4
0
28 Mar 2021
Pufferfish: Communication-efficient Models At No Extra Cost
Conference on Machine Learning and Systems (MLSys), 2021
Hongyi Wang
Saurabh Agarwal
Dimitris Papailiopoulos
142
67
0
05 Mar 2021
On the Impact of Device and Behavioral Heterogeneity in Federated Learning
A. Abdelmoniem
Chen-Yu Ho
Pantelis Papageorgiou
Muhammad Bilal
Marco Canini
FedML
144
18
0
15 Feb 2021
An Efficient Statistical-based Gradient Compression Technique for Distributed Training Systems
Conference on Machine Learning and Systems (MLSys), 2021
A. Abdelmoniem
Ahmed Elzanaty
Mohamed-Slim Alouini
Marco Canini
224
92
0
26 Jan 2021
DynaComm: Accelerating Distributed CNN Training between Edges and Clouds through Dynamic Communication Scheduling
IEEE Journal on Selected Areas in Communications (JSAC), 2021
Shangming Cai
Dongsheng Wang
Haixia Wang
Yongqiang Lyu
Guangquan Xu
Xi Zheng
A. Vasilakos
236
8
0
20 Jan 2021
Accordion: Adaptive Gradient Communication via Critical Learning Regime Identification
Saurabh Agarwal
Hongyi Wang
Kangwook Lee
Shivaram Venkataraman
Dimitris Papailiopoulos
185
27
0
29 Oct 2020
Fairness-aware Agnostic Federated Learning
SIAM International Conference on Data Mining (SDM), 2020
Wei Du
Depeng Xu
Xintao Wu
Hanghang Tong
FedML
225
150
0
10 Oct 2020
Descending through a Crowded Valley - Benchmarking Deep Learning Optimizers
Robin M. Schmidt
Frank Schneider
Philipp Hennig
ODL
798
186
0
03 Jul 2020
Is Network the Bottleneck of Distributed Training?
Zhen Zhang
Chaokun Chang
Yanghua Peng
Yida Wang
R. Arora
Xin Jin
251
92
0
17 Jun 2020
Characterizing Impacts of Heterogeneity in Federated Learning upon Large-Scale Smartphone Data
Chengxu Yang
Qipeng Wang
Mengwei Xu
Shangguang Wang
Kaigui Bian
Yunxin Liu
Xuanzhe Liu
173
24
0
12 Jun 2020
Communication-Efficient Distributed Deep Learning: A Comprehensive Survey
Zhenheng Tang
Shaoshuai Shi
Wei Wang
Yue Liu
Xiaowen Chu
236
54
0
10 Mar 2020
Communication optimization strategies for distributed deep neural network training: A survey
Shuo Ouyang
Dezun Dong
Yemao Xu
Liquan Xiao
312
12
0
06 Mar 2020
Communication-Efficient Decentralized Learning with Sparsification and Adaptive Peer Selection
IEEE International Conference on Distributed Computing Systems (ICDCS), 2020
Zhenheng Tang
Shaoshuai Shi
Xiaowen Chu
FedML
154
68
0
22 Feb 2020
Communication-Efficient Edge AI: Algorithms and Systems
IEEE Communications Surveys and Tutorials (COMST), 2020
Yuanming Shi
Kai Yang
Tao Jiang
Jun Zhang
Khaled B. Letaief
GNN
193
401
0
22 Feb 2020
Intermittent Pulling with Local Compensation for Communication-Efficient Federated Learning
Yining Qi
Zhihao Qu
Song Guo
Xin Gao
Ruixuan Li
Baoliu Ye
FedML
143
9
0
22 Jan 2020
Adaptive Gradient Sparsification for Efficient Federated Learning: An Online Learning Approach
IEEE International Conference on Distributed Computing Systems (ICDCS), 2020
Pengchao Han
Maroun Touma
K. Leung
FedML
332
215
0
14 Jan 2020
Understanding Top-k Sparsification in Distributed Deep Learning
Shaoshuai Shi
Xiaowen Chu
Ka Chun Cheung
Simon See
341
115
0
20 Nov 2019
Layer-wise Adaptive Gradient Sparsification for Distributed Deep Learning with Convergence Guarantees
European Conference on Artificial Intelligence (ECAI), 2019
Shaoshuai Shi
Zhenheng Tang
Qiang-qiang Wang
Kaiyong Zhao
Xiaowen Chu
262
27
0
20 Nov 2019
On the Discrepancy between the Theoretical Analysis and Practical Implementations of Compressed Communication for Distributed Deep Learning
AAAI Conference on Artificial Intelligence (AAAI), 2019
Aritra Dutta
El Houcine Bergou
A. Abdelmoniem
Chen-Yu Ho
Atal Narayan Sahu
Marco Canini
Panos Kalnis
154
86
0
19 Nov 2019
Hyper-Sphere Quantization: Communication-Efficient SGD for Federated Learning
Xinyan Dai
Xiao Yan
Kaiwen Zhou
Han Yang
K. K. Ng
James Cheng
Yu Fan
FedML
151
49
0
12 Nov 2019