ResearchTrend.AI

The Convergence of Sparsified Gradient Methods (arXiv:1809.10505)

27 September 2018
Dan Alistarh
Torsten Hoefler
M. Johansson
Sarit Khirirat
Nikola Konstantinov
Cédric Renggli

Papers citing "The Convergence of Sparsified Gradient Methods"

50 / 124 papers shown
γ-FedHT: Stepsize-Aware Hard-Threshold Gradient Compression in Federated Learning
Rongwei Lu, Yutong Jiang, Jinrui Zhang, Chunyang Li, Yifei Zhu, Bin Chen, Zhi Wang
FedML · 18 May 2025

Accelerated Distributed Optimization with Compression and Error Feedback
Yuan Gao, Anton Rodomanov, Jeremy Rack, Sebastian U. Stich
11 Mar 2025

Biased Federated Learning under Wireless Heterogeneity
Muhammad Faraz Ul Abrar, Nicolò Michelusi
FedML · 08 Mar 2025

Sketched Adaptive Federated Deep Learning: A Sharp Convergence Analysis
Zhijie Chen, Qiaobo Li, A. Banerjee
FedML · 11 Nov 2024

Trustworthiness of Stochastic Gradient Descent in Distributed Learning
Hongyang Li, Caesar Wu, Mohammed Chadli, Said Mammar, Pascal Bouvry
28 Oct 2024

LDAdam: Adaptive Optimization from Low-Dimensional Gradient Statistics
Thomas Robert, M. Safaryan, Ionut-Vlad Modoranu, Dan Alistarh
ODL · 21 Oct 2024

Boosting Asynchronous Decentralized Learning with Model Fragmentation
Sayan Biswas, Anne-Marie Kermarrec, Alexis Marouani, Rafael Pires, Rishi Sharma, M. Vos
16 Oct 2024

Language Models as Zero-shot Lossless Gradient Compressors: Towards General Neural Parameter Prior Models
Hui-Po Wang, Mario Fritz
26 Sep 2024

Novel Gradient Sparsification Algorithm via Bayesian Inference
Ali Bereyhi, B. Liang, G. Boudreau, Ali Afana
23 Sep 2024

Communication-efficient Vertical Federated Learning via Compressed Error Feedback
Pedro Valdeira, João Xavier, Cláudia Soares, Yuejie Chi
FedML · 20 Jun 2024

SADDLe: Sharpness-Aware Decentralized Deep Learning with Heterogeneous Data
Sakshi Choudhary, Sai Aparna Aketi, Kaushik Roy
FedML · 22 May 2024

Communication-Efficient Large-Scale Distributed Deep Learning: A Comprehensive Survey
Feng Liang, Zhen Zhang, Haifeng Lu, Victor C. M. Leung, Yanyi Guo, Xiping Hu
GNN · 09 Apr 2024

Correlated Quantization for Faster Nonconvex Distributed Optimization
Andrei Panferov, Yury Demidovich, Ahmad Rammal, Peter Richtárik
MQ · 10 Jan 2024

Kimad: Adaptive Gradient Compression with Bandwidth Awareness
Jihao Xin, Ivan Ilin, Shunkang Zhang, Marco Canini, Peter Richtárik
13 Dec 2023

Federated Learning is Better with Non-Homomorphic Encryption
Konstantin Burlachenko, Abdulmajeed Alrowithi, Fahad Ali Albalawi, Peter Richtárik
FedML · 04 Dec 2023

Communication Compression for Byzantine Robust Learning: New Efficient Algorithms and Improved Rates
Ahmad Rammal, Kaja Gruntkowska, Nikita Fedin, Eduard A. Gorbunov, Peter Richtárik
15 Oct 2023

FedDIP: Federated Learning with Extreme Dynamic Pruning and Incremental Regularization
Qianyu Long, Christos Anagnostopoulos, S. P. Parambath, Daning Bi
AI4CE, FedML · 13 Sep 2023

Maestro: Uncovering Low-Rank Structures via Trainable Decomposition
Samuel Horváth, Stefanos Laskaridis, Shashank Rajput, Hongyi Wang
BDL · 28 Aug 2023

Revolutionizing Wireless Networks with Federated Learning: A Comprehensive Review
Sajjad Emdadi Mahdimahalleh
AI4CE · 01 Aug 2023

Compressed Private Aggregation for Scalable and Robust Federated Learning over Massive Networks
Natalie Lang, Nir Shlezinger, Rafael G. L. D'Oliveira, S. E. Rouayheb
FedML · 01 Aug 2023

Accelerating Distributed ML Training via Selective Synchronization
S. Tyagi, Martin Swany
FedML · 16 Jul 2023

Get More for Less in Decentralized Learning Systems
Akash Dhasade, Anne-Marie Kermarrec, Rafael Pires, Rishi Sharma, Milos Vujasinovic, Jeffrey Wigger
07 Jun 2023

Clip21: Error Feedback for Gradient Clipping
Sarit Khirirat, Eduard A. Gorbunov, Samuel Horváth, Rustem Islamov, Fakhri Karray, Peter Richtárik
30 May 2023

Error Feedback Shines when Features are Rare
Peter Richtárik, Elnur Gasanov, Konstantin Burlachenko
24 May 2023

GraVAC: Adaptive Compression for Communication-Efficient Distributed DL Training
S. Tyagi, Martin Swany
20 May 2023

Convergence and Privacy of Decentralized Nonconvex Optimization with Gradient Clipping and Communication Compression
Boyue Li, Yuejie Chi
17 May 2023

Lower Bounds and Accelerated Algorithms in Distributed Stochastic Optimization with Communication Compression
Yutong He, Xinmeng Huang, Yiming Chen, W. Yin, Kun Yuan
12 May 2023

ELF: Federated Langevin Algorithms with Primal, Dual and Bidirectional Compression
Avetik G. Karagulyan, Peter Richtárik
FedML · 08 Mar 2023

Private Read-Update-Write with Controllable Information Leakage for Storage-Efficient Federated Learning with Top r Sparsification
Sajani Vithana, S. Ulukus
FedML · 07 Mar 2023

Similarity, Compression and Local Steps: Three Pillars of Efficient Communications for Distributed Variational Inequalities
Aleksandr Beznosikov, Martin Takáč, Alexander Gasnikov
15 Feb 2023

Sparse-SignSGD with Majority Vote for Communication-Efficient Distributed Learning
Chanho Park, Namyoon Lee
FedML · 15 Feb 2023

DoCoFL: Downlink Compression for Cross-Device Federated Learning
Ron Dorfman, S. Vargaftik, Y. Ben-Itzhak, Kfir Y. Levy
FedML · 01 Feb 2023

M22: A Communication-Efficient Algorithm for Federated Learning Inspired by Rate-Distortion
Yangyi Liu, Stefano Rini, Sadaf Salehkalaibar, Jun Chen
FedML · 23 Jan 2023

CEDAS: A Compressed Decentralized Stochastic Gradient Method with Improved Convergence
Kun-Yen Huang, Shin-Yi Pu
14 Jan 2023

Federated Learning with Flexible Control
Shiqiang Wang, Jake B. Perazzone, Mingyue Ji, Kevin S. Chan
FedML · 16 Dec 2022

Analysis of Error Feedback in Federated Non-Convex Optimization with Biased Compression
Xiaoyun Li, Ping Li
FedML · 25 Nov 2022

Adaptive Compression for Communication-Efficient Distributed Training
Maksim Makarenko, Elnur Gasanov, Rustem Islamov, Abdurakhmon Sadiev, Peter Richtárik
31 Oct 2022

GradSkip: Communication-Accelerated Local Gradient Methods with Better Computational Complexity
Artavazd Maranjyan, M. Safaryan, Peter Richtárik
28 Oct 2022

Communication-Efficient Adam-Type Algorithms for Distributed Data Mining
Wenhan Xian, Feihu Huang, Heng-Chiao Huang
FedML · 14 Oct 2022

Downlink Compression Improves TopK Sparsification
William Zou, H. Sterck, Jun Liu
30 Sep 2022

Empirical Analysis on Top-k Gradient Sparsification for Distributed Deep Learning in a Supercomputing Environment
Daegun Yoon, Sangyoon Oh
18 Sep 2022

Private Read Update Write (PRUW) in Federated Submodel Learning (FSL): Communication Efficient Schemes With and Without Sparsification
Sajani Vithana, S. Ulukus
FedML · 09 Sep 2022

HammingMesh: A Network Topology for Large-Scale Deep Learning
Torsten Hoefler, Tommaso Bonato, Daniele De Sensi, Salvatore Di Girolamo, Shigang Li, Marco Heddes, Jon Belk, Deepak Goel, Miguel Castro, Steve Scott
3DH, GNN, AI4CE · 03 Sep 2022

Joint Privacy Enhancement and Quantization in Federated Learning
Natalie Lang, Elad Sofer, Tomer Shaked, Nir Shlezinger
FedML · 23 Aug 2022

Energy and Spectrum Efficient Federated Learning via High-Precision Over-the-Air Computation
Liang Li, Chenpei Huang, Dian Shi, Hao Wang, Xiangwei Zhou, Minglei Shu, Miao Pan
FedML · 15 Aug 2022

Communication Acceleration of Local Gradient Methods via an Accelerated Primal-Dual Algorithm with Inexact Prox
Abdurakhmon Sadiev, D. Kovalev, Peter Richtárik
08 Jul 2022

Distributed Newton-Type Methods with Communication Compression and Bernoulli Aggregation
Rustem Islamov, Xun Qian, Slavomír Hanzely, M. Safaryan, Peter Richtárik
07 Jun 2022

Fine-tuning Language Models over Slow Networks using Activation Compression with Guarantees
Jue Wang, Binhang Yuan, Luka Rimanic, Yongjun He, Tri Dao, Beidi Chen, Christopher Ré, Ce Zhang
AI4CE · 02 Jun 2022

Private Federated Submodel Learning with Sparsification
Sajani Vithana, S. Ulukus
FedML · 31 May 2022

Communication-Efficient Distributionally Robust Decentralized Learning
Matteo Zecchin, Marios Kountouris, David Gesbert
31 May 2022