Distributed Learning with Compressed Gradient Differences
Konstantin Mishchenko, Eduard A. Gorbunov, Martin Takáč, Peter Richtárik
arXiv:1901.09269, 26 January 2019

Papers citing "Distributed Learning with Compressed Gradient Differences" (27 of 77 papers shown)
  • MURANA: A Generic Framework for Stochastic Variance-Reduced Optimization. Laurent Condat, Peter Richtárik. 06 Jun 2021.
  • FedNL: Making Newton-Type Methods Applicable to Federated Learning. M. Safaryan, Rustem Islamov, Xun Qian, Peter Richtárik. 05 Jun 2021. [FedML]
  • Slashing Communication Traffic in Federated Learning by Transmitting Clustered Model Updates. Laizhong Cui, Xiaoxin Su, Yipeng Zhou, Yi Pan. 10 May 2021. [FedML]
  • BROADCAST: Reducing Both Stochastic and Compression Noise to Robustify Communication-Efficient Federated Learning. He Zhu, Qing Ling. 14 Apr 2021. [FedML, AAML]
  • Moshpit SGD: Communication-Efficient Decentralized Training on Heterogeneous Unreliable Devices. Max Ryabinin, Eduard A. Gorbunov, Vsevolod Plokhotnyuk, Gennady Pekhimenko. 04 Mar 2021.
  • IntSGD: Adaptive Floatless Compression of Stochastic Gradients. Konstantin Mishchenko, Bokun Wang, D. Kovalev, Peter Richtárik. 16 Feb 2021.
  • MARINA: Faster Non-Convex Distributed Learning with Compression. Eduard A. Gorbunov, Konstantin Burlachenko, Zhize Li, Peter Richtárik. 15 Feb 2021.
  • Smoothness Matrices Beat Smoothness Constants: Better Communication Compression Techniques for Distributed Optimization. M. Safaryan, Filip Hanzely, Peter Richtárik. 14 Feb 2021.
  • Distributed Second Order Methods with Fast Rates and Compressed Communication. Rustem Islamov, Xun Qian, Peter Richtárik. 14 Feb 2021.
  • Linear Convergence in Federated Learning: Tackling Client Heterogeneity and Sparse Gradients. A. Mitra, Rayana H. Jaafar, George J. Pappas, Hamed Hassani. 14 Feb 2021. [FedML]
  • Straggler-Resilient Federated Learning: Leveraging the Interplay Between Statistical Accuracy and System Heterogeneity. Amirhossein Reisizadeh, Isidoros Tziotis, Hamed Hassani, Aryan Mokhtari, Ramtin Pedarsani. 28 Dec 2020. [FedML]
  • Recent Theoretical Advances in Non-Convex Optimization. Marina Danilova, Pavel Dvurechensky, Alexander Gasnikov, Eduard A. Gorbunov, Sergey Guminov, Dmitry Kamzolov, Innokentiy Shibaev. 11 Dec 2020.
  • Local SGD: Unified Theory and New Efficient Methods. Eduard A. Gorbunov, Filip Hanzely, Peter Richtárik. 03 Nov 2020. [FedML]
  • Linearly Converging Error Compensated SGD. Eduard A. Gorbunov, D. Kovalev, Dmitry Makarenko, Peter Richtárik. 23 Oct 2020.
  • Optimal Gradient Compression for Distributed and Federated Learning. Alyazeed Albasyoni, M. Safaryan, Laurent Condat, Peter Richtárik. 07 Oct 2020. [FedML]
  • On Communication Compression for Distributed Optimization on Heterogeneous Data. Sebastian U. Stich. 04 Sep 2020.
  • Linear Convergent Decentralized Optimization with Compression. Xiaorui Liu, Yao Li, Rongrong Wang, Jiliang Tang, Ming Yan. 01 Jul 2020.
  • Bidirectional compression in heterogeneous settings for distributed or federated learning with partial participation: tight convergence guarantees. Constantin Philippenko, Aymeric Dieuleveut. 25 Jun 2020. [FedML]
  • A Better Alternative to Error Feedback for Communication-Efficient Distributed Learning. Samuel Horváth, Peter Richtárik. 19 Jun 2020.
  • Federated Accelerated Stochastic Gradient Descent. Honglin Yuan, Tengyu Ma. 16 Jun 2020. [FedML]
  • Detached Error Feedback for Distributed SGD with Random Sparsification. An Xu, Heng-Chiao Huang. 11 Apr 2020.
  • On Biased Compression for Distributed Learning. Aleksandr Beznosikov, Samuel Horváth, Peter Richtárik, M. Safaryan. 27 Feb 2020.
  • Differentially Quantized Gradient Methods. Chung-Yi Lin, V. Kostina, B. Hassibi. 06 Feb 2020. [MQ]
  • Distributed Fixed Point Methods with Compressed Iterates. Sélim Chraibi, Ahmed Khaled, D. Kovalev, Peter Richtárik, Adil Salim, Martin Takáč. 20 Dec 2019. [FedML]
  • Gradient Descent with Compressed Iterates. Ahmed Khaled, Peter Richtárik. 10 Sep 2019.
  • A Unified Theory of SGD: Variance Reduction, Sampling, Quantization and Coordinate Descent. Eduard A. Gorbunov, Filip Hanzely, Peter Richtárik. 27 May 2019.
  • Natural Compression for Distributed Deep Learning. Samuel Horváth, Chen-Yu Ho, L. Horvath, Atal Narayan Sahu, Marco Canini, Peter Richtárik. 27 May 2019.