arXiv:1901.04359
A Distributed Synchronous SGD Algorithm with Global Top-k Sparsification for Low Bandwidth Networks
14 January 2019
S. Shi, Qiang-qiang Wang, Kaiyong Zhao, Zhenheng Tang, Yuxin Wang, Xiang Huang, Xiaowen Chu
Papers citing "A Distributed Synchronous SGD Algorithm with Global Top-$k$ Sparsification for Low Bandwidth Networks" (16 of 66 papers shown)
Asynchronous Federated Learning with Reduced Number of Rounds and with Differential Privacy from Less Aggregated Gaussian Noise. Marten van Dijk, Nhuong V. Nguyen, Toan N. Nguyen, Lam M. Nguyen, Quoc Tran-Dinh, Phuong Ha Nguyen. 17 Jul 2020. [FedML]
Efficient Sparse-Dense Matrix-Matrix Multiplication on GPUs Using the Customized Sparse Storage Format. S. Shi, Qiang-qiang Wang, X. Chu. 29 May 2020.
A Quantitative Survey of Communication Optimizations in Distributed Deep Learning. S. Shi, Zhenheng Tang, X. Chu, Chengjian Liu, Wei Wang, Bo Li. 27 May 2020. [GNN, AI4CE]
A flexible framework for communication-efficient machine learning: from HPC to IoT. Sarit Khirirat, Sindri Magnússon, Arda Aytekin, M. Johansson. 13 Mar 2020.
Communication-Efficient Distributed Deep Learning: A Comprehensive Survey. Zhenheng Tang, S. Shi, Wei Wang, Bo-wen Li, Xiaowen Chu. 10 Mar 2020.
Communication Contention Aware Scheduling of Multiple Deep Learning Training Jobs. Qiang-qiang Wang, S. Shi, Canhui Wang, X. Chu. 24 Feb 2020.
Communication-Efficient Decentralized Learning with Sparsification and Adaptive Peer Selection. Zhenheng Tang, S. Shi, X. Chu. 22 Feb 2020. [FedML]
MDLdroid: a ChainSGD-reduce Approach to Mobile Deep Learning for Personal Mobile Sensing. Yu Zhang, Tao Gu, Xi Zhang. 07 Feb 2020. [FedML]
Adaptive Gradient Sparsification for Efficient Federated Learning: An Online Learning Approach. Pengchao Han, Shiqiang Wang, K. Leung. 14 Jan 2020. [FedML]
MG-WFBP: Merging Gradients Wisely for Efficient Communication in Distributed Deep Learning. S. Shi, X. Chu, Bo Li. 18 Dec 2019. [FedML]
Understanding Top-k Sparsification in Distributed Deep Learning. S. Shi, X. Chu, Ka Chun Cheung, Simon See. 20 Nov 2019.
Layer-wise Adaptive Gradient Sparsification for Distributed Deep Learning with Convergence Guarantees. S. Shi, Zhenheng Tang, Qiang-qiang Wang, Kaiyong Zhao, X. Chu. 20 Nov 2019.
Model Pruning Enables Efficient Federated Learning on Edge Devices. Yuang Jiang, Shiqiang Wang, Victor Valls, Bongjun Ko, Wei-Han Lee, Kin K. Leung, Leandros Tassiulas. 26 Sep 2019.
Accelerated Sparsified SGD with Error Feedback. Tomoya Murata, Taiji Suzuki. 29 May 2019.
MG-WFBP: Efficient Data Communication for Distributed Synchronous SGD Algorithms. S. Shi, X. Chu, Bo Li. 27 Nov 2018. [FedML]
Stochastic Nonconvex Optimization with Large Minibatches. Weiran Wang, Nathan Srebro. 25 Sep 2017.