Qsparse-local-SGD: Distributed SGD with Quantization, Sparsification, and Local Computations

IEEE Journal on Selected Areas in Information Theory (JSAIT), 2019
6 June 2019
Debraj Basu, Deepesh Data, C. Karakuş, Suhas Diggavi
Tags: MQ
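
The paper's title names its method: each worker runs several local SGD steps between communication rounds and then transmits a compressed update, where the compression operator composes sparsification with quantization and the dropped coordinates are carried forward in an error-feedback memory. Below is a minimal NumPy sketch of that composed operator and of one worker round; the function names, the top-k sparsifier, and the scaled-sign quantizer are illustrative assumptions, not necessarily the paper's exact operators.

```python
import numpy as np

def topk_sparsify(v, k):
    """Keep the k largest-magnitude entries of v; zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

def scaled_sign_quantize(v):
    """One-bit quantization of the nonzero entries, scaled to preserve l1 mass."""
    nnz = np.count_nonzero(v)
    if nnz == 0:
        return v
    return (np.linalg.norm(v, 1) / nnz) * np.sign(v)

def qsparse_compress(v, k):
    """Composed operator: sparsify first, then quantize the surviving entries."""
    return scaled_sign_quantize(topk_sparsify(v, k))

def local_round(x, mem, grad_fn, num_local_steps, lr, k):
    """One synchronization round for a single worker (hypothetical interface).

    x: local model, mem: error-feedback memory carried across rounds,
    grad_fn: stochastic gradient oracle, lr: step size.
    Returns the compressed message to transmit and the updated memory.
    """
    x_start = x.copy()
    for _ in range(num_local_steps):   # local SGD steps, no communication
        x = x - lr * grad_fn(x)
    update = (x_start - x) + mem       # net local progress plus carried error
    msg = qsparse_compress(update, k)  # the only thing actually transmitted
    mem = update - msg                 # error feedback: remember what was dropped
    return msg, mem
```

In the full scheme, a server would average the workers' compressed messages, apply the average to the global model, and broadcast the result, with each worker resuming its local steps from the new global model.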

Papers citing "Qsparse-local-SGD: Distributed SGD with Quantization, Sparsification, and Local Computations"

23 / 223 papers shown

Breaking (Global) Barriers in Parallel Stochastic Optimization with Wait-Avoiding Group Averaging
IEEE Transactions on Parallel and Distributed Systems (TPDS), 2020
Shigang Li, Tal Ben-Nun, Giorgi Nadiradze, Salvatore Di Girolamo, Nikoli Dryden, Dan Alistarh, Torsten Hoefler
30 Apr 2020 · 351 · 15 · 0

Detached Error Feedback for Distributed SGD with Random Sparsification
International Conference on Machine Learning (ICML), 2020
An Xu, Heng-Chiao Huang
11 Apr 2020 · 163 · 12 · 0

Dualize, Split, Randomize: Toward Fast Nonsmooth Optimization Algorithms
Journal of Optimization Theory and Applications (JOTA), 2020
Adil Salim, Laurent Condat, Konstantin Mishchenko, Peter Richtárik
03 Apr 2020 · 238 · 28 · 0

A Unified Theory of Decentralized SGD with Changing Topology and Local Updates
International Conference on Machine Learning (ICML), 2020
Anastasia Koloskova, Nicolas Loizou, Sadra Boreiri, Martin Jaggi, Sebastian U. Stich
Tags: FedML
23 Mar 2020 · 459 · 577 · 0

Communication-Efficient Distributed Deep Learning: A Comprehensive Survey
Zhenheng Tang, Shaoshuai Shi, Wei Wang, Yue Liu, Xiaowen Chu
10 Mar 2020 · 218 · 54 · 0

Communication-Efficient Distributed SGD with Error-Feedback, Revisited
International Journal of Computational Intelligence Systems (IJCIS), 2020
T. Phuong, L. T. Phong
Tags: FedML
09 Mar 2020 · 74 · 4 · 0

Adaptive Federated Optimization
International Conference on Learning Representations (ICLR), 2020
Sashank J. Reddi, Zachary B. Charles, Manzil Zaheer, Zachary Garrett, Keith Rush, Jakub Konecný, Sanjiv Kumar, H. B. McMahan
Tags: FedML
29 Feb 2020 · 571 · 1,726 · 0

Optimal Gradient Quantization Condition for Communication-Efficient Distributed Training
An Xu, Zhouyuan Huo, Heng-Chiao Huang
Tags: MQ
25 Feb 2020 · 108 · 6 · 0

Stochastic-Sign SGD for Federated Learning with Theoretical Guarantees
IEEE Transactions on Neural Networks and Learning Systems (IEEE TNNLS), 2020
Richeng Jin, Yufan Huang, Xiaofan He, H. Dai, Tianfu Wu
Tags: FedML
25 Feb 2020 · 247 · 65 · 0

Personalized Federated Learning: A Meta-Learning Approach
Alireza Fallah, Aryan Mokhtari, Asuman Ozdaglar
Tags: FedML
19 Feb 2020 · 510 · 639 · 0

On the Communication Latency of Wireless Decentralized Learning
Navid Naderializadeh
10 Feb 2020 · 112 · 3 · 0

Differentially Quantized Gradient Methods
IEEE Transactions on Information Theory (IEEE Trans. Inf. Theory), 2020
Chung-Yi Lin, V. Kostina, B. Hassibi
Tags: MQ
06 Feb 2020 · 290 · 8 · 0

Elastic Consistency: A General Consistency Model for Distributed Stochastic Gradient Descent
Giorgi Nadiradze, Ilia Markov, Bapi Chatterjee, Vyacheslav Kungurtsev, Dan Alistarh
Tags: FedML
16 Jan 2020 · 211 · 14 · 0

Adaptive Gradient Sparsification for Efficient Federated Learning: An Online Learning Approach
IEEE International Conference on Distributed Computing Systems (ICDCS), 2020
Pengchao Han, Maroun Touma, K. Leung
Tags: FedML
14 Jan 2020 · 250 · 212 · 0

Distributed Fixed Point Methods with Compressed Iterates
Sélim Chraibi, Ahmed Khaled, D. Kovalev, Peter Richtárik, Adil Salim, Martin Takáč
Tags: FedML
20 Dec 2019 · 136 · 18 · 0

Advances and Open Problems in Federated Learning
Peter Kairouz, H. B. McMahan, Brendan Avent, A. Bellet, M. Bennis, ..., Zheng Xu, Qiang Yang, Felix X. Yu, Han Yu, Sen Zhao
Tags: FedML, AI4CE
10 Dec 2019 · 531 · 7,389 · 0

SPARQ-SGD: Event-Triggered and Compressed Communication in Decentralized Stochastic Optimization
Navjot Singh, Deepesh Data, Jemin George, Suhas Diggavi
31 Oct 2019 · 156 · 24 · 0

SCAFFOLD: Stochastic Controlled Averaging for Federated Learning
Sai Praneeth Karimireddy, Satyen Kale, M. Mohri, Sashank J. Reddi, Sebastian U. Stich, A. Suresh
Tags: FedML
14 Oct 2019 · 267 · 383 · 0

Tighter Theory for Local SGD on Identical and Heterogeneous Data
International Conference on Artificial Intelligence and Statistics (AISTATS), 2019
Ahmed Khaled, Konstantin Mishchenko, Peter Richtárik
10 Sep 2019 · 347 · 457 · 0

Gradient Descent with Compressed Iterates
Ahmed Khaled, Peter Richtárik
10 Sep 2019 · 160 · 25 · 0

First Analysis of Local GD on Heterogeneous Data
Ahmed Khaled, Konstantin Mishchenko, Peter Richtárik
Tags: FedML
10 Sep 2019 · 199 · 178 · 0

RATQ: A Universal Fixed-Length Quantizer for Stochastic Optimization
IEEE Transactions on Information Theory (IEEE Trans. Inf. Theory), 2019
Prathamesh Mayekar, Himanshu Tyagi
Tags: MQ
22 Aug 2019 · 244 · 50 · 0

Global Momentum Compression for Sparse Communication in Distributed Learning
Chang-Wei Shi, Shen-Yi Zhao, Yin-Peng Xie, Hao Gao, Wu-Jun Li
30 May 2019 · 253 · 1 · 0