Qsparse-local-SGD: Distributed SGD with Quantization, Sparsification, and Local Computations

IEEE Journal on Selected Areas in Information Theory (JSAIT), 2019
6 June 2019
Debraj Basu, Deepesh Data, C. Karakuş, Suhas Diggavi
Tags: MQ

Papers citing "Qsparse-local-SGD: Distributed SGD with Quantization, Sparsification, and Local Computations"

Showing 50 of 223 citing papers.
On Large-Cohort Training for Federated Learning
Zachary B. Charles, Zachary Garrett, Zhouyuan Huo, Sergei Shmulyian, Virginia Smith
Neural Information Processing Systems (NeurIPS), 2021 · 15 Jun 2021 · Tags: FedML

CFedAvg: Achieving Efficient Communication and Fast Convergence in Non-IID Federated Learning
Haibo Yang, Jia Liu, Elizabeth S. Bentley
International Symposium on Modeling and Optimization in Mobile, Ad-Hoc and Wireless Networks (WiOpt), 2021 · 14 Jun 2021 · Tags: FedML

EF21: A New, Simpler, Theoretically Better, and Practically Faster Error Feedback
Peter Richtárik, Igor Sokolov, Ilyas Fatkhullin
Neural Information Processing Systems (NeurIPS), 2021 · 09 Jun 2021

Fast Federated Learning in the Presence of Arbitrary Device Unavailability
Xinran Gu, Kaixuan Huang, Jingzhao Zhang, Longbo Huang
Neural Information Processing Systems (NeurIPS), 2021 · 08 Jun 2021 · Tags: FedML

MURANA: A Generic Framework for Stochastic Variance-Reduced Optimization
Laurent Condat, Peter Richtárik
Mathematical and Scientific Machine Learning (MSML), 2021 · 06 Jun 2021

Neural Distributed Source Coding
Jay Whang, Alliot Nagle, Anish Acharya, Hyeji Kim, A. Dimakis
IEEE Journal on Selected Areas in Information Theory (JSAIT), 2021 · 05 Jun 2021

Compressed Communication for Distributed Training: Adaptive Methods and System
Yuchen Zhong, Cong Xie, Shuai Zheng, Yanghua Peng
17 May 2021

OCTOPUS: Overcoming Performance and Privatization Bottlenecks in Distributed Learning
Shuo Wang, Surya Nepal, Kristen Moore, M. Grobler, Carsten Rudolph, A. Abuadbba
IEEE Transactions on Parallel and Distributed Systems (TPDS), 2021 · 03 May 2021 · Tags: FedML

Communication-Efficient Federated Learning with Dual-Side Low-Rank Compression
Zhefeng Qiao, Xianghao Yu, Jun Zhang, Khaled B. Letaief
26 Apr 2021 · Tags: FedML

1-bit LAMB: Communication Efficient Large-Scale Large-Batch Training with LAMB's Convergence Speed
Conglong Li, A. A. Awan, Hanlin Tang, Samyam Rajbhandari, Yuxiong He
International Conference on High Performance Computing (HiPC), 2021 · 13 Apr 2021

Communication-Efficient Agnostic Federated Averaging
Jae Hun Ro, Mingqing Chen, Rajiv Mathews, M. Mohri, A. Suresh
Interspeech, 2021 · 06 Apr 2021 · Tags: FedML

MergeComp: A Compression Scheduler for Scalable Communication-Efficient Distributed Training
Zhuang Wang, X. Wu, T. Ng
28 Mar 2021 · Tags: GNN

Learned Gradient Compression for Distributed Deep Learning
L. Abrahamyan, Yiming Chen, Giannis Bekoulis, Nikos Deligiannis
IEEE Transactions on Neural Networks and Learning Systems (TNNLS), 2021 · 16 Mar 2021

EventGraD: Event-Triggered Communication in Parallel Machine Learning
Soumyadip Ghosh, B. Aquino, V. Gupta
Neurocomputing, 2021 · 12 Mar 2021 · Tags: FedML

Convergence and Accuracy Trade-Offs in Federated Learning and Meta-Learning
Zachary B. Charles, Jakub Konecný
International Conference on Artificial Intelligence and Statistics (AISTATS), 2021 · 08 Mar 2021 · Tags: FedML

Personalized Federated Learning using Hypernetworks
Aviv Shamsian, Aviv Navon, Ethan Fetaya, Gal Chechik
International Conference on Machine Learning (ICML), 2021 · 08 Mar 2021 · Tags: FedML

Moshpit SGD: Communication-Efficient Decentralized Training on Heterogeneous Unreliable Devices
Max Ryabinin, Eduard A. Gorbunov, Vsevolod Plokhotnyuk, Gennady Pekhimenko
Neural Information Processing Systems (NeurIPS), 2021 · 04 Mar 2021

Wirelessly Powered Federated Edge Learning: Optimal Tradeoffs Between Convergence and Power Transfer
Qunsong Zeng, Yuqing Du, Kaibin Huang
IEEE Transactions on Wireless Communications (IEEE TWC), 2021 · 24 Feb 2021

QuPeL: Quantized Personalization with Applications to Federated Learning
Kaan Ozkara, Navjot Singh, Deepesh Data, Suhas Diggavi
23 Feb 2021 · Tags: FedML

MARINA: Faster Non-Convex Distributed Learning with Compression
Eduard A. Gorbunov, Konstantin Burlachenko, Zhize Li, Peter Richtárik
International Conference on Machine Learning (ICML), 2021 · 15 Feb 2021

DeepReduce: A Sparse-tensor Communication Framework for Distributed Deep Learning
Kelly Kostopoulou, Hang Xu, Aritra Dutta, Xin Li, A. Ntoulas, Panos Kalnis
05 Feb 2021

1-bit Adam: Communication Efficient Large-Scale Training with Adam's Convergence Speed
Hanlin Tang, Shaoduo Gan, A. A. Awan, Samyam Rajbhandari, Conglong Li, Xiangru Lian, Ji Liu, Ce Zhang, Yuxiong He
International Conference on Machine Learning (ICML), 2021 · 04 Feb 2021 · Tags: AI4CE

Federated Learning over Wireless Device-to-Device Networks: Algorithms and Convergence Analysis
Hong Xing, Osvaldo Simeone, Suzhi Bi
IEEE Journal on Selected Areas in Communications (JSAC), 2021 · 29 Jan 2021

To Talk or to Work: Flexible Communication Compression for Energy Efficient Federated Learning over Heterogeneous Mobile Edge Devices
Liang Li, Dian Shi, Ronghui Hou, Hui Li, Miao Pan, Zhu Han
IEEE Conference on Computer Communications (INFOCOM), 2020 · 22 Dec 2020 · Tags: FedML

Quantizing data for distributed learning
Osama A. Hanna, Yahya H. Ezzeldin, Christina Fragouli, Suhas Diggavi
IEEE Journal on Selected Areas in Information Theory (JSAIT), 2020 · 14 Dec 2020 · Tags: FedML

Towards Communication-efficient and Attack-Resistant Federated Edge Learning for Industrial Internet of Things
Yi Liu, Ruihui Zhao, Jiawen Kang, A. Yassine, Dusit Niyato, Jia-Jie Peng
08 Dec 2020 · Tags: FedML

Faster Non-Convex Federated Learning via Global and Local Momentum
Rudrajit Das, Anish Acharya, Abolfazl Hashemi, Sujay Sanghavi, Inderjit S. Dhillon, Ufuk Topcu
07 Dec 2020 · Tags: FedML

Wyner-Ziv Estimators for Distributed Mean Estimation with Side Information and Optimization
Prathamesh Mayekar, Shubham K. Jha, A. Suresh, Himanshu Tyagi
IEEE Transactions on Information Theory (IEEE Trans. Inf. Theory), 2020 · 24 Nov 2020 · Tags: FedML

On the Benefits of Multiple Gossip Steps in Communication-Constrained Decentralized Optimization
Abolfazl Hashemi, Anish Acharya, Rudrajit Das, H. Vikalo, Sujay Sanghavi, Inderjit Dhillon
20 Nov 2020

Local SGD: Unified Theory and New Efficient Methods
Eduard A. Gorbunov, Filip Hanzely, Peter Richtárik
03 Nov 2020 · Tags: FedML

Optimal Client Sampling for Federated Learning
Jiajun He, Samuel Horváth, Peter Richtárik
26 Oct 2020 · Tags: FedML

Linearly Converging Error Compensated SGD
Eduard A. Gorbunov, D. Kovalev, Dmitry Makarenko, Peter Richtárik
Neural Information Processing Systems (NeurIPS), 2020 · 23 Oct 2020

Towards Tight Communication Lower Bounds for Distributed Optimisation
Dan Alistarh, Janne H. Korhonen
Neural Information Processing Systems (NeurIPS), 2020 · 16 Oct 2020 · Tags: FedML

Optimal Gradient Compression for Distributed and Federated Learning
Alyazeed Albasyoni, M. Safaryan, Laurent Condat, Peter Richtárik
07 Oct 2020 · Tags: FedML

Coded Stochastic ADMM for Decentralized Consensus Optimization with Edge Computing
Hao Chen, Yu Ye, Ming Xiao, Mikael Skoglund, H. Vincent Poor
IEEE Internet of Things Journal (IEEE IoT J.), 2020 · 02 Oct 2020

APMSqueeze: A Communication Efficient Adam-Preconditioned Momentum SGD Algorithm
Hanlin Tang, Shaoduo Gan, Samyam Rajbhandari, Xiangru Lian, Ji Liu, Yuxiong He, Ce Zhang
26 Aug 2020

Shuffled Model of Federated Learning: Privacy, Communication and Accuracy Trade-offs
Antonious M. Girgis, Deepesh Data, Suhas Diggavi, Peter Kairouz, A. Suresh
17 Aug 2020 · Tags: FedML

Step-Ahead Error Feedback for Distributed Training with Compressed Gradient
An Xu, Zhouyuan Huo, Heng-Chiao Huang
13 Aug 2020

FedSKETCH: Communication-Efficient and Private Federated Learning via Sketching
Farzin Haddadpour, Belhal Karimi, Ping Li, Xiaoyun Li
11 Aug 2020 · Tags: FedML

CSER: Communication-efficient SGD with Error Reset
Cong Xie, Shuai Zheng, Oluwasanmi Koyejo, Indranil Gupta, Mu Li, Yanghua Peng
Neural Information Processing Systems (NeurIPS), 2020 · 26 Jul 2020

Tackling the Objective Inconsistency Problem in Heterogeneous Federated Optimization
Jianyu Wang, Qinghua Liu, Hao Liang, Gauri Joshi, H. Vincent Poor
Neural Information Processing Systems (NeurIPS), 2020 · 15 Jul 2020 · Tags: MoMe, FedML

Federated Learning with Compression: Unified Analysis and Sharp Guarantees
Farzin Haddadpour, Mohammad Mahdi Kamani, Aryan Mokhtari, M. Mahdavi
02 Jul 2020 · Tags: FedML

On the Outsized Importance of Learning Rates in Local Update Methods
Zachary B. Charles, Jakub Konecný
02 Jul 2020 · Tags: FedML

Byzantine-Resilient High-Dimensional Federated Learning
Deepesh Data, Suhas Diggavi
22 Jun 2020 · Tags: FedML, AAML

A Better Alternative to Error Feedback for Communication-Efficient Distributed Learning
Samuel Horváth, Peter Richtárik
19 Jun 2020

Federated Accelerated Stochastic Gradient Descent
Honglin Yuan, Tengyu Ma
16 Jun 2020 · Tags: FedML

rTop-k: A Statistical Estimation Approach to Distributed SGD
L. P. Barnes, Huseyin A. Inan, Berivan Isik, Ayfer Özgür
21 May 2020

Byzantine-Resilient SGD in High Dimensions on Heterogeneous Data
Deepesh Data, Suhas Diggavi
16 May 2020 · Tags: FedML

SQuARM-SGD: Communication-Efficient Momentum SGD for Decentralized Optimization
Navjot Singh, Deepesh Data, Jemin George, Suhas Diggavi
13 May 2020

Communication-Efficient Distributed Stochastic AUC Maximization with Deep Neural Networks
Zhishuai Guo, Mingrui Liu, Zhuoning Yuan, Li Shen, Wei Liu, Tianbao Yang
International Conference on Machine Learning (ICML), 2020 · 05 May 2020