
Communication-Efficient Distributed Learning via Lazily Aggregated Quantized Gradients
arXiv:1909.07588 · 17 September 2019
Jun Sun, Tianyi Chen, G. Giannakis, Zaiyue Yang

Papers citing "Communication-Efficient Distributed Learning via Lazily Aggregated Quantized Gradients" (21 papers)
1. Decentralized Personalized Federated Learning based on a Conditional Sparse-to-Sparser Scheme
   Qianyu Long, Qiyuan Wang, Christos Anagnostopoulos, Daning Bi · FedML · 24 Apr 2024

2. Communication-Efficient Federated Learning via Regularized Sparse Random Networks
   Mohamad Mestoukirdi, Omid Esrafilian, David Gesbert, Qianrui Li, N. Gresset · FedML · 19 Sep 2023

3. FedDIP: Federated Learning with Extreme Dynamic Pruning and Incremental Regularization
   Qianyu Long, Christos Anagnostopoulos, S. P. Parambath, Daning Bi · AI4CE, FedML · 13 Sep 2023

4. Blockchain-Based Federated Learning: Incentivizing Data Sharing and Penalizing Dishonest Behavior
   Amir Jaberzadeh, A. Shrestha, F. Khan, Mohammed Afaan Shaikh, Bhargav Dave, Jason Geng · FedML · 19 Jul 2023

5. Distributed Linear Bandits under Communication Constraints
   Sudeep Salgia, Qing Zhao · 04 Nov 2022

6. Adaptive Compression for Communication-Efficient Distributed Training
   Maksim Makarenko, Elnur Gasanov, Rustem Islamov, Abdurakhmon Sadiev, Peter Richtárik · 31 Oct 2022

7. GradSkip: Communication-Accelerated Local Gradient Methods with Better Computational Complexity
   A. Maranjyan, M. Safaryan, Peter Richtárik · 28 Oct 2022

8. Distributed Distributionally Robust Optimization with Non-Convex Objectives
   Yang Jiao, Kai Yang, Dongjin Song · 14 Oct 2022

9. Lazy Queries Can Reduce Variance in Zeroth-order Optimization
   Quan-Wu Xiao, Qing Ling, Tianyi Chen · 14 Jun 2022

10. Personalized Federated Learning with Server-Side Information
    Jaehun Song, Min Hwan Oh, Hyung-Sin Kim · FedML · 23 May 2022

11. FedCau: A Proactive Stop Policy for Communication and Computation Efficient Federated Learning
    Afsaneh Mahmoudi, H. S. Ghadikolaei, José Hélio da Cruz Júnior, Carlo Fischione · 16 Apr 2022

12. Communication-Efficient Distributed Learning via Sparse and Adaptive Stochastic Gradient
    Xiaoge Deng, Dongsheng Li, Tao Sun, Xicheng Lu · FedML · 08 Dec 2021

13. Leveraging Spatial and Temporal Correlations in Sparsified Mean Estimation
    Divyansh Jhunjhunwala, Ankur Mallick, Advait Gadhikar, S. Kadhe, Gauri Joshi · 14 Oct 2021

14. ErrorCompensatedX: error compensation for variance reduced algorithms
    Hanlin Tang, Yao Li, Ji Liu, Ming Yan · 04 Aug 2021

15. Decentralized Federated Averaging
    Tao Sun, Dongsheng Li, Bao Wang · FedML · 23 Apr 2021

16. Distributed Learning in Wireless Networks: Recent Progress and Future Challenges
    Mingzhe Chen, Deniz Gündüz, Kaibin Huang, Walid Saad, M. Bennis, Aneta Vulgarakis Feljan, H. Vincent Poor · 05 Apr 2021

17. 1-bit Adam: Communication Efficient Large-Scale Training with Adam's Convergence Speed
    Hanlin Tang, Shaoduo Gan, A. A. Awan, Samyam Rajbhandari, Conglong Li, Xiangru Lian, Ji Liu, Ce Zhang, Yuxiong He · AI4CE · 04 Feb 2021

18. On the Benefits of Multiple Gossip Steps in Communication-Constrained Decentralized Optimization
    Abolfazl Hashemi, Anish Acharya, Rudrajit Das, H. Vikalo, Sujay Sanghavi, Inderjit Dhillon · 20 Nov 2020

19. Communication Efficient Distributed Learning with Censored, Quantized, and Generalized Group ADMM
    Chaouki Ben Issaid, Anis Elgabli, Jihong Park, M. Bennis, Mérouane Debbah · FedML · 14 Sep 2020

20. Intermittent Pulling with Local Compensation for Communication-Efficient Federated Learning
    Yining Qi, Zhihao Qu, Song Guo, Xin Gao, Ruixuan Li, Baoliu Ye · FedML · 22 Jan 2020

21. Q-GADMM: Quantized Group ADMM for Communication Efficient Decentralized Machine Learning
    Anis Elgabli, Jihong Park, Amrit Singh Bedi, Chaouki Ben Issaid, M. Bennis, Vaneet Aggarwal · 23 Oct 2019