QSGD: Communication-Efficient SGD via Gradient Quantization and Encoding
arXiv:1610.02132 · 7 October 2016
Dan Alistarh, Demjan Grubic, Jerry Li, Ryota Tomioka, Milan Vojnović
Community: MQ
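For context, the method named in the title above compresses gradients by stochastically rounding each coordinate to a small number of levels, scaled by the gradient norm, so that the quantized gradient remains an unbiased estimate. The following is a minimal sketch of that s-level stochastic quantization; the function name and NumPy setup are illustrative, not the authors' code:

```python
import numpy as np

def qsgd_quantize(v, s=4, rng=None):
    """Stochastic s-level quantization in the spirit of QSGD.

    Each coordinate is mapped to one of s+1 magnitude levels of
    ||v||_2, rounding up or down at random so that E[Q(v)] = v.
    """
    rng = np.random.default_rng(rng)
    norm = np.linalg.norm(v)
    if norm == 0:
        return np.zeros_like(v)
    level = np.abs(v) / norm * s          # real-valued level in [0, s]
    lower = np.floor(level)
    # round up with probability equal to the fractional part -> unbiased
    prob_up = level - lower
    level = lower + (rng.random(v.shape) < prob_up)
    return norm * np.sign(v) * level / s

# Example: quantize a small gradient vector to s = 4 levels.
v = np.array([0.3, -1.2, 0.05, 2.0])
q = qsgd_quantize(v, s=4, rng=0)
```

Each output coordinate is a multiple of ||v||_2 / s, so it can be encoded in a few bits per coordinate plus one float for the norm, which is the source of the communication savings.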
Papers citing "QSGD: Communication-Efficient SGD via Gradient Quantization and Encoding" (32 of 82 shown)
- Local Stochastic Gradient Descent Ascent: Convergence Analysis and Communication Efficiency. Yuyang Deng, M. Mahdavi. 25 Feb 2021.
- MARINA: Faster Non-Convex Distributed Learning with Compression. Eduard A. Gorbunov, Konstantin Burlachenko, Zhize Li, Peter Richtárik. 15 Feb 2021.
- Communication-efficient Distributed Cooperative Learning with Compressed Beliefs. Taha Toghani, César A. Uribe. 14 Feb 2021.
- Sparse-Push: Communication- & Energy-Efficient Decentralized Distributed Learning over Directed & Time-Varying Graphs with non-IID Datasets. Sai Aparna Aketi, Amandeep Singh, J. Rabaey. 10 Feb 2021.
- Federated Learning over Wireless Device-to-Device Networks: Algorithms and Convergence Analysis. Hong Xing, Osvaldo Simeone, Suzhi Bi. 29 Jan 2021.
- Activation Density based Mixed-Precision Quantization for Energy Efficient Neural Networks. Karina Vasquez, Yeshwanth Venkatesha, Abhiroop Bhattacharjee, Abhishek Moitra, Priyadarshini Panda. 12 Jan 2021. [MQ]
- CatFedAvg: Optimising Communication-efficiency and Classification Accuracy in Federated Learning. D. Sarkar, Sumit Rai, Ankur Narang. 14 Nov 2020. [FedML]
- Coded Computing for Low-Latency Federated Learning over Wireless Edge Networks. Saurav Prakash, S. Dhakal, M. Akdeniz, Yair Yona, S. Talwar, Salman Avestimehr, N. Himayat. 12 Nov 2020. [FedML]
- Improving Neural Network Training in Low Dimensional Random Bases. Frithjof Gressmann, Zach Eaton-Rosen, Carlo Luschi. 09 Nov 2020.
- Optimization for Supervised Machine Learning: Randomized Algorithms for Data and Parameters. Filip Hanzely. 26 Aug 2020.
- Federated Learning for Channel Estimation in Conventional and RIS-Assisted Massive MIMO. Ahmet M. Elbir, Sinem Coleri. 25 Aug 2020.
- Communication-Efficient and Distributed Learning Over Wireless Networks: Principles and Applications. Jihong Park, S. Samarakoon, Anis Elgabli, Joongheon Kim, M. Bennis, Seong-Lyun Kim, Mérouane Debbah. 06 Aug 2020.
- Byzantine-Resilient Secure Federated Learning. Jinhyun So, Başak Güler, A. Avestimehr. 21 Jul 2020. [FedML]
- Incentives for Federated Learning: a Hypothesis Elicitation Approach. Yang Liu, Jiaheng Wei. 21 Jul 2020. [FedML]
- Is Network the Bottleneck of Distributed Training? Zhen Zhang, Chaokun Chang, Haibin Lin, Yida Wang, R. Arora, Xin Jin. 17 Jun 2020.
- Communication-Efficient Gradient Coding for Straggler Mitigation in Distributed Learning. S. Kadhe, O. O. Koyluoglu, K. Ramchandran. 14 May 2020.
- Detached Error Feedback for Distributed SGD with Random Sparsification. An Xu, Heng-Chiao Huang. 11 Apr 2020.
- A Robust Gradient Tracking Method for Distributed Optimization over Directed Networks. Shi Pu. 31 Mar 2020.
- Dynamic Sampling and Selective Masking for Communication-Efficient Federated Learning. Shaoxiong Ji, Wenqi Jiang, A. Walid, Xue Li. 21 Mar 2020. [FedML]
- A Compressive Sensing Approach for Federated Learning over Massive MIMO Communication Systems. Yo-Seb Jeon, M. Amiri, Jun Li, H. Vincent Poor. 18 Mar 2020.
- Distributed Training of Deep Neural Network Acoustic Models for Automatic Speech Recognition. Xiaodong Cui, Wei Zhang, Ulrich Finkler, G. Saon, M. Picheny, David S. Kung. 24 Feb 2020.
- Uncertainty Principle for Communication Compression in Distributed and Federated Learning and the Search for an Optimal Compressor. M. Safaryan, Egor Shulgin, Peter Richtárik. 20 Feb 2020.
- Towards Sharper First-Order Adversary with Quantized Gradients. Zhuanghua Liu, Ivor W. Tsang. 01 Feb 2020. [AAML]
- One-Bit Over-the-Air Aggregation for Communication-Efficient Federated Edge Learning: Design and Convergence Analysis. Guangxu Zhu, Yuqing Du, Deniz Gunduz, Kaibin Huang. 16 Jan 2020.
- MG-WFBP: Merging Gradients Wisely for Efficient Communication in Distributed Deep Learning. S. Shi, X. Chu, Bo Li. 18 Dec 2019. [FedML]
- Gradient Descent with Compressed Iterates. Ahmed Khaled, Peter Richtárik. 10 Sep 2019.
- Gradient Coding with Clustering and Multi-message Communication. Emre Ozfatura, Deniz Gunduz, S. Ulukus. 05 Mar 2019.
- cpSGD: Communication-efficient and differentially-private distributed SGD. Naman Agarwal, A. Suresh, Felix X. Yu, Sanjiv Kumar, H. B. McMahan. 27 May 2018. [FedML]
- Double Quantization for Communication-Efficient Distributed Optimization. Yue Yu, Jiaxiang Wu, Longbo Huang. 25 May 2018. [MQ]
- Local SGD Converges Fast and Communicates Little. Sebastian U. Stich. 24 May 2018. [FedML]
- Sparse Binary Compression: Towards Distributed Deep Learning with minimal Communication. Felix Sattler, Simon Wiedemann, K. Müller, Wojciech Samek. 22 May 2018. [MQ]
- Gradient Sparsification for Communication-Efficient Distributed Optimization. Jianqiao Wangni, Jialei Wang, Ji Liu, Tong Zhang. 26 Oct 2017.