Distributed Learning with Compressed Gradient Differences
Konstantin Mishchenko, Eduard A. Gorbunov, Martin Takáč, Peter Richtárik
arXiv:1901.09269, 26 January 2019
Papers citing "Distributed Learning with Compressed Gradient Differences" (50 of 77 papers shown):
Event-Driven Online Vertical Federated Learning. Ganyu Wang, Boyu Wang, Bin Gu, Charles Ling. 17 Jun 2025.
Tight analyses of first-order methods with error feedback. Daniel Berg Thomsen, Adrien B. Taylor, Aymeric Dieuleveut. 05 Jun 2025.
On the Interaction of Noise, Compression Role, and Adaptivity under (L_0, L_1)-Smoothness: An SDE-based Approach. Enea Monzio Compagnoni, Rustem Islamov, Antonio Orvieto, Eduard A. Gorbunov. 30 May 2025.
Accelerated Training of Federated Learning via Second-Order Methods. Mrinmay Sen, Sidhant R Nair, C Krishna Mohan. [FedML] 29 May 2025.
Coded Robust Aggregation for Distributed Learning under Byzantine Attacks. Chengxi Li, Ming Xiao, Mikael Skoglund. [AAML, OOD] 17 May 2025.
Accelerated Distributed Optimization with Compression and Error Feedback. Yuan Gao, Anton Rodomanov, Jeremy Rack, Sebastian U. Stich. 11 Mar 2025.
Communication-efficient Vertical Federated Learning via Compressed Error Feedback. Pedro Valdeira, João Xavier, Cláudia Soares, Yuejie Chi. [FedML] 20 Jun 2024.
Inexact subgradient methods for semialgebraic functions. Jérôme Bolte, Tam Le, Éric Moulines, Edouard Pauwels. 30 Apr 2024.
LoCoDL: Communication-Efficient Distributed Learning with Local Training and Compression. Laurent Condat, Artavazd Maranjyan, Peter Richtárik. 07 Mar 2024.
Rendering Wireless Environments Useful for Gradient Estimators: A Zero-Order Stochastic Federated Learning Method. Elissa Mhanna, Mohamad Assaad. 30 Jan 2024.
Kimad: Adaptive Gradient Compression with Bandwidth Awareness. Jihao Xin, Ivan Ilin, Shunkang Zhang, Marco Canini, Peter Richtárik. 13 Dec 2023.
Communication-Efficient Heterogeneous Federated Learning with Generalized Heavy-Ball Momentum. Riccardo Zaccone, Sai Praneeth Karimireddy, Carlo Masone, Marco Ciccone. [FedML] 30 Nov 2023.
Communication Compression for Byzantine Robust Learning: New Efficient Algorithms and Improved Rates. Ahmad Rammal, Kaja Gruntkowska, Nikita Fedin, Eduard A. Gorbunov, Peter Richtárik. 15 Oct 2023.
Adaptive Self-Distillation for Minimizing Client Drift in Heterogeneous Federated Learning. M. Yashwanth, Gaurav Kumar Nayak, Aryaveer Singh, Yogesh Singh, Anirban Chakraborty. [FedML] 31 May 2023.
Global-QSGD: Practical Floatless Quantization for Distributed Learning with Theoretical Guarantees. Jihao Xin, Marco Canini, Peter Richtárik, Samuel Horváth. 29 May 2023.
Lower Bounds and Accelerated Algorithms in Distributed Stochastic Optimization with Communication Compression. Yutong He, Xinmeng Huang, Yiming Chen, W. Yin, Kun Yuan. 12 May 2023.
ELF: Federated Langevin Algorithms with Primal, Dual and Bidirectional Compression. Avetik G. Karagulyan, Peter Richtárik. [FedML] 08 Mar 2023.
TAMUNA: Doubly Accelerated Distributed Optimization with Local Training, Compression, and Partial Participation. Laurent Condat, Ivan Agarský, Grigory Malinovsky, Peter Richtárik. [FedML] 20 Feb 2023.
Similarity, Compression and Local Steps: Three Pillars of Efficient Communications for Distributed Variational Inequalities. Aleksandr Beznosikov, Martin Takáč, Alexander Gasnikov. 15 Feb 2023.
CEDAS: A Compressed Decentralized Stochastic Gradient Method with Improved Convergence. Kun-Yen Huang, Shin-Yi Pu. 14 Jan 2023.
Temporal Difference Learning with Compressed Updates: Error-Feedback meets Reinforcement Learning. A. Mitra, George J. Pappas, Hamed Hassani. 03 Jan 2023.
Can 5th Generation Local Training Methods Support Client Sampling? Yes! Michał Grudzień, Grigory Malinovsky, Peter Richtárik. 29 Dec 2022.
On the effectiveness of partial variance reduction in federated learning with heterogeneous data. Yue Liu, Mikkel N. Schmidt, T. S. Alstrøm, Sebastian U. Stich. [FedML] 05 Dec 2022.
Adaptive Top-K in SGD for Communication-Efficient Distributed Learning. Mengzhe Ruan, Guangfeng Yan, Yuanzhang Xiao, Linqi Song, Weitao Xu. 24 Oct 2022.
Provably Doubly Accelerated Federated Learning: The First Theoretically Successful Combination of Local Training and Communication Compression. Laurent Condat, Ivan Agarský, Peter Richtárik. [FedML] 24 Oct 2022.
FLECS-CGD: A Federated Learning Second-Order Framework via Compression and Sketching with Compressed Gradient Differences. A. Agafonov, Brahim Erraji, Martin Takáč. [FedML] 18 Oct 2022.
EF21-P and Friends: Improved Theoretical Communication Complexity for Distributed Optimization with Bidirectional Compression. Kaja Gruntkowska, Alexander Tyurin, Peter Richtárik. 30 Sep 2022.
Label driven Knowledge Distillation for Federated Learning with non-IID Data. Minh-Duong Nguyen, Quoc-Viet Pham, D. Hoang, Long Tran-Thanh, Diep N. Nguyen, Won Joo Hwang. 29 Sep 2022.
Variance Reduced ProxSkip: Algorithm, Theory and Application to Federated Learning. Grigory Malinovsky, Kai Yi, Peter Richtárik. [FedML] 09 Jul 2022.
SoteriaFL: A Unified Framework for Private Federated Learning with Communication Compression. Zhize Li, Haoyu Zhao, Boyue Li, Yuejie Chi. [FedML] 20 Jun 2022.
Communication-Efficient Federated Learning With Data and Client Heterogeneity. Hossein Zakerinia, Shayan Talaei, Giorgi Nadiradze, Dan Alistarh. [FedML] 20 Jun 2022.
Compression and Data Similarity: Combination of Two Techniques for Communication-Efficient Solving of Distributed Variational Inequalities. Aleksandr Beznosikov, Alexander Gasnikov. 19 Jun 2022.
FedNew: A Communication-Efficient and Privacy-Preserving Newton-Type Method for Federated Learning. Anis Elgabli, Chaouki Ben Issaid, Amrit Singh Bedi, K. Rajawat, M. Bennis, Vaneet Aggarwal. [FedML] 17 Jun 2022.
Lower Bounds and Nearly Optimal Algorithms in Distributed Learning with Communication Compression. Xinmeng Huang, Yiming Chen, W. Yin, Kun Yuan. 08 Jun 2022.
Distributed Newton-Type Methods with Communication Compression and Bernoulli Aggregation. Rustem Islamov, Xun Qian, Slavomír Hanzely, M. Safaryan, Peter Richtárik. 07 Jun 2022.
QUIC-FL: Quick Unbiased Compression for Federated Learning. Ran Ben-Basat, S. Vargaftik, Amit Portnoy, Gil Einziger, Y. Ben-Itzhak, Michael Mitzenmacher. [FedML] 26 May 2022.
Federated Random Reshuffling with Compression and Variance Reduction. Grigory Malinovsky, Peter Richtárik. [FedML] 08 May 2022.
FedShuffle: Recipes for Better Use of Local Work in Federated Learning. Samuel Horváth, Maziar Sanjabi, Lin Xiao, Peter Richtárik, Michael G. Rabbat. [FedML] 27 Apr 2022.
Linear Stochastic Bandits over a Bit-Constrained Channel. A. Mitra, Hamed Hassani, George J. Pappas. 02 Mar 2022.
Stochastic Gradient Descent-Ascent: Unified Theory and New Efficient Methods. Aleksandr Beznosikov, Eduard A. Gorbunov, Hugo Berard, Nicolas Loizou. 15 Feb 2022.
FL_PyTorch: optimization research simulator for federated learning. Konstantin Burlachenko, Samuel Horváth, Peter Richtárik. [FedML] 07 Feb 2022.
Faster Rates for Compressed Federated Learning with Client-Variance Reduction. Haoyu Zhao, Konstantin Burlachenko, Zhize Li, Peter Richtárik. [FedML] 24 Dec 2021.
Basis Matters: Better Communication-Efficient Second Order Methods for Federated Learning. Xun Qian, Rustem Islamov, M. Safaryan, Peter Richtárik. [FedML] 02 Nov 2021.
Leveraging Spatial and Temporal Correlations in Sparsified Mean Estimation. Divyansh Jhunjhunwala, Ankur Mallick, Advait Gadhikar, S. Kadhe, Gauri Joshi. 14 Oct 2021.
EDEN: Communication-Efficient and Robust Distributed Mean Estimation for Federated Learning. S. Vargaftik, Ran Ben-Basat, Amit Portnoy, Gal Mendelson, Y. Ben-Itzhak, Michael Mitzenmacher. [FedML] 19 Aug 2021.
ErrorCompensatedX: error compensation for variance reduced algorithms. Hanlin Tang, Yao Li, Ji Liu, Ming Yan. 04 Aug 2021.
Fed-ensemble: Improving Generalization through Model Ensembling in Federated Learning. Naichen Shi, Fan Lai, Raed Al Kontar, Mosharaf Chowdhury. [FedML] 21 Jul 2021.
Secure Distributed Training at Scale. Eduard A. Gorbunov, Alexander Borzunov, Michael Diskin, Max Ryabinin. [FedML] 21 Jun 2021.
Compressed Gradient Tracking for Decentralized Optimization Over General Directed Networks. Zhuoqing Song, Lei Shi, Shi Pu, Ming Yan. 14 Jun 2021.
EF21: A New, Simpler, Theoretically Better, and Practically Faster Error Feedback. Peter Richtárik, Igor Sokolov, Ilyas Fatkhullin. 09 Jun 2021.