ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

arXiv:2002.11364 · Cited By
Acceleration for Compressed Gradient Descent in Distributed and Federated Optimization

26 February 2020
Zhize Li, D. Kovalev, Xun Qian, Peter Richtárik
Communities: FedML, AI4CE

Papers citing "Acceleration for Compressed Gradient Descent in Distributed and Federated Optimization"

29 / 29 papers shown

1. LoCoDL: Communication-Efficient Distributed Learning with Local Training and Compression. Laurent Condat, A. Maranjyan, Peter Richtárik. 07 Mar 2024.
2. Correlated Quantization for Faster Nonconvex Distributed Optimization. Andrei Panferov, Yury Demidovich, Ahmad Rammal, Peter Richtárik. [MQ] 10 Jan 2024.
3. Communication Compression for Byzantine Robust Learning: New Efficient Algorithms and Improved Rates. Ahmad Rammal, Kaja Gruntkowska, Nikita Fedin, Eduard A. Gorbunov, Peter Richtárik. 15 Oct 2023.
4. Error Feedback Shines when Features are Rare. Peter Richtárik, Elnur Gasanov, Konstantin Burlachenko. 24 May 2023.
5. Convergence and Privacy of Decentralized Nonconvex Optimization with Gradient Clipping and Communication Compression. Boyue Li, Yuejie Chi. 17 May 2023.
6. Coresets for Vertical Federated Learning: Regularized Linear Regression and $K$-Means Clustering. Lingxiao Huang, Zhize Li, Jialin Sun, Haoyu Zhao. [FedML] 26 Oct 2022.
7. lo-fi: distributed fine-tuning without communication. Mitchell Wortsman, Suchin Gururangan, Shen Li, Ali Farhadi, Ludwig Schmidt, Michael G. Rabbat, Ari S. Morcos. 19 Oct 2022.
8. Accelerated Federated Learning with Decoupled Adaptive Optimization. Jiayin Jin, Jiaxiang Ren, Yang Zhou, Lingjuan Lyu, Ji Liu, Dejing Dou. [AI4CE, FedML] 14 Jul 2022.
9. Linear Stochastic Bandits over a Bit-Constrained Channel. A. Mitra, Hamed Hassani, George J. Pappas. 02 Mar 2022.
10. Stochastic Gradient Descent-Ascent: Unified Theory and New Efficient Methods. Aleksandr Beznosikov, Eduard A. Gorbunov, Hugo Berard, Nicolas Loizou. 15 Feb 2022.
11. BEER: Fast $O(1/T)$ Rate for Decentralized Nonconvex Optimization with Communication Compression. Haoyu Zhao, Boyue Li, Zhize Li, Peter Richtárik, Yuejie Chi. 31 Jan 2022.
12. Basis Matters: Better Communication-Efficient Second Order Methods for Federated Learning. Xun Qian, Rustem Islamov, M. Safaryan, Peter Richtárik. [FedML] 02 Nov 2021.
13. An Operator Splitting View of Federated Learning. Saber Malekmohammadi, K. Shaloudegi, Zeou Hu, Yaoliang Yu. [FedML] 12 Aug 2021.
14. Vertical Federated Learning without Revealing Intersection Membership. Jiankai Sun, Xin Yang, Yuanshun Yao, Aonan Zhang, Weihao Gao, Junyuan Xie, Chong-Jun Wang. [FedML] 10 Jun 2021.
15. FedNL: Making Newton-Type Methods Applicable to Federated Learning. M. Safaryan, Rustem Islamov, Xun Qian, Peter Richtárik. [FedML] 05 Jun 2021.
16. EasyFL: A Low-code Federated Learning Platform For Dummies. Weiming Zhuang, Xin Gan, Yonggang Wen, Shuai Zhang. [FedML] 17 May 2021.
17. ANITA: An Optimal Loopless Accelerated Variance-Reduced Gradient Method. Zhize Li. 21 Mar 2021.
18. Personalized Federated Learning using Hypernetworks. Aviv Shamsian, Aviv Navon, Ethan Fetaya, Gal Chechik. [FedML] 08 Mar 2021.
19. Moshpit SGD: Communication-Efficient Decentralized Training on Heterogeneous Unreliable Devices. Max Ryabinin, Eduard A. Gorbunov, Vsevolod Plokhotnyuk, Gennady Pekhimenko. 04 Mar 2021.
20. Distributed Second Order Methods with Fast Rates and Compressed Communication. Rustem Islamov, Xun Qian, Peter Richtárik. 14 Feb 2021.
21. Linear Convergence in Federated Learning: Tackling Client Heterogeneity and Sparse Gradients. A. Mitra, Rayana H. Jaafar, George J. Pappas, Hamed Hassani. [FedML] 14 Feb 2021.
22. Federated Learning with Nesterov Accelerated Gradient. Zhengjie Yang, Wei Bao, Dong Yuan, Nguyen H. Tran, Albert Y. Zomaya. [FedML] 18 Sep 2020.
23. PAGE: A Simple and Optimal Probabilistic Gradient Estimator for Nonconvex Optimization. Zhize Li, Hongyan Bao, Xiangliang Zhang, Peter Richtárik. [ODL] 25 Aug 2020.
24. Communication-Efficient and Distributed Learning Over Wireless Networks: Principles and Applications. Jihong Park, S. Samarakoon, Anis Elgabli, Joongheon Kim, M. Bennis, Seong-Lyun Kim, Mérouane Debbah. 06 Aug 2020.
25. Tackling the Objective Inconsistency Problem in Heterogeneous Federated Optimization. Jianyu Wang, Qinghua Liu, Hao Liang, Gauri Joshi, H. Vincent Poor. [MoMe, FedML] 15 Jul 2020.
26. Federated Learning with Compression: Unified Analysis and Sharp Guarantees. Farzin Haddadpour, Mohammad Mahdi Kamani, Aryan Mokhtari, M. Mahdavi. [FedML] 02 Jul 2020.
27. Detached Error Feedback for Distributed SGD with Random Sparsification. An Xu, Heng-Chiao Huang. 11 Apr 2020.
28. Personalized Federated Learning: A Meta-Learning Approach. Alireza Fallah, Aryan Mokhtari, Asuman Ozdaglar. [FedML] 19 Feb 2020.
29. Q-GADMM: Quantized Group ADMM for Communication Efficient Decentralized Machine Learning. Anis Elgabli, Jihong Park, Amrit Singh Bedi, Chaouki Ben Issaid, M. Bennis, Vaneet Aggarwal. 23 Oct 2019.