Communication Compression for Decentralized Training

17 March 2018
Hanlin Tang
Shaoduo Gan
Ce Zhang
Tong Zhang
Ji Liu
ArXiv · PDF · HTML

Papers citing "Communication Compression for Decentralized Training"

46 / 46 papers shown
Scalable Decentralized Learning with Teleportation
Yuki Takezawa
Sebastian U. Stich
56
1
0
25 Jan 2025
Fully First-Order Methods for Decentralized Bilevel Optimization
Xiaoyu Wang
Xuxing Chen
Shiqian Ma
Tong Zhang
36
0
0
25 Oct 2024
Ordered Momentum for Asynchronous SGD
Chang-Wei Shi
Yi-Rui Yang
Wu-Jun Li
ODL
52
0
0
27 Jul 2024
SADDLe: Sharpness-Aware Decentralized Deep Learning with Heterogeneous Data
Sakshi Choudhary
Sai Aparna Aketi
Kaushik Roy
FedML
39
0
0
22 May 2024
Compressed and Sparse Models for Non-Convex Decentralized Learning
Andrew Campbell
Hang Liu
Leah Woldemariam
Anna Scaglione
20
0
0
09 Nov 2023
Improved Convergence Analysis and SNR Control Strategies for Federated Learning in the Presence of Noise
Antesh Upadhyay
Abolfazl Hashemi
31
9
0
14 Jul 2023
Get More for Less in Decentralized Learning Systems
Akash Dhasade
Anne-Marie Kermarrec
Rafael Pires
Rishi Sharma
Milos Vujasinovic
Jeffrey Wigger
26
7
0
07 Jun 2023
Beyond Exponential Graph: Communication-Efficient Topologies for Decentralized Learning via Finite-time Convergence
Yuki Takezawa
Ryoma Sato
Han Bao
Kenta Niwa
M. Yamada
31
9
0
19 May 2023
Convergence and Privacy of Decentralized Nonconvex Optimization with Gradient Clipping and Communication Compression
Boyue Li
Yuejie Chi
21
12
0
17 May 2023
M22: A Communication-Efficient Algorithm for Federated Learning Inspired by Rate-Distortion
Yangyi Liu
Stefano Rini
Sadaf Salehkalaibar
Jun Chen
FedML
11
4
0
23 Jan 2023
CEDAS: A Compressed Decentralized Stochastic Gradient Method with Improved Convergence
Kun-Yen Huang
Shin-Yi Pu
30
9
0
14 Jan 2023
Decentralized Federated Learning: Fundamentals, State of the Art, Frameworks, Trends, and Challenges
Enrique Tomás Martínez Beltrán
Mario Quiles Pérez
Pedro Miguel Sánchez Sánchez
Sergio López Bernal
Gérome Bovet
M. Pérez
Gregorio Martínez Pérez
Alberto Huertas Celdrán
FedML
18
221
0
15 Nov 2022
Decentralized Training of Foundation Models in Heterogeneous Environments
Binhang Yuan
Yongjun He
Jared Davis
Tianyi Zhang
Tri Dao
Beidi Chen
Percy Liang
Christopher Ré
Ce Zhang
22
90
0
02 Jun 2022
A review of Federated Learning in Intrusion Detection Systems for IoT
Aitor Belenguer
J. Navaridas
J. A. Pascual
20
15
0
26 Apr 2022
Enabling All In-Edge Deep Learning: A Literature Review
Praveen Joshi
Mohammed Hasanuzzaman
Chandra Thapa
Haithem Afli
T. Scully
21
22
0
07 Apr 2022
BEER: Fast $O(1/T)$ Rate for Decentralized Nonconvex Optimization with Communication Compression
Haoyu Zhao
Boyue Li
Zhize Li
Peter Richtárik
Yuejie Chi
19
48
0
31 Jan 2022
Bristle: Decentralized Federated Learning in Byzantine, Non-i.i.d. Environments
Joost Verbraeken
M. Vos
J. Pouwelse
28
4
0
21 Oct 2021
Decentralized Composite Optimization with Compression
Yao Li
Xiaorui Liu
Jiliang Tang
Ming Yan
Kun Yuan
19
9
0
10 Aug 2021
Communication Efficiency in Federated Learning: Achievements and Challenges
Osama Shahid
Seyedamin Pouriyeh
R. Parizi
Quan Z. Sheng
Gautam Srivastava
Liang Zhao
FedML
24
74
0
23 Jul 2021
BAGUA: Scaling up Distributed Learning with System Relaxations
Shaoduo Gan
Xiangru Lian
Rui Wang
Jianbin Chang
Chengjun Liu
...
Jiawei Jiang
Binhang Yuan
Sen Yang
Ji Liu
Ce Zhang
23
30
0
03 Jul 2021
ResIST: Layer-Wise Decomposition of ResNets for Distributed Training
Chen Dun
Cameron R. Wolfe
C. Jermaine
Anastasios Kyrillidis
16
21
0
02 Jul 2021
Federated Learning for Internet of Things: A Federated Learning Framework for On-device Anomaly Data Detection
Tuo Zhang
Chaoyang He
Tian-Shya Ma
Lei Gao
Mark Ma
Salman Avestimehr
FedML
16
112
0
15 Jun 2021
SpreadGNN: Serverless Multi-task Federated Learning for Graph Neural Networks
Chaoyang He
Emir Ceyani
Keshav Balasubramanian
M. Annavaram
Salman Avestimehr
FedML
19
50
0
04 Jun 2021
Towards Demystifying Serverless Machine Learning Training
Jiawei Jiang
Shaoduo Gan
Yue Liu
Fanlin Wang
Gustavo Alonso
Ana Klimovic
Ankit Singla
Wentao Wu
Ce Zhang
19
121
0
17 May 2021
DataLens: Scalable Privacy Preserving Training via Gradient Compression and Aggregation
Boxin Wang
Fan Wu
Yunhui Long
Luka Rimanic
Ce Zhang
Bo-wen Li
FedML
31
63
0
20 Mar 2021
On the Utility of Gradient Compression in Distributed Training Systems
Saurabh Agarwal
Hongyi Wang
Shivaram Venkataraman
Dimitris Papailiopoulos
23
46
0
28 Feb 2021
Straggler-Resilient Distributed Machine Learning with Dynamic Backup Workers
Guojun Xiong
Gang Yan
Rahul Singh
Jian Li
28
12
0
11 Feb 2021
Sparse-Push: Communication- & Energy-Efficient Decentralized Distributed Learning over Directed & Time-Varying Graphs with non-IID Datasets
Sai Aparna Aketi
Amandeep Singh
J. Rabaey
21
10
0
10 Feb 2021
Faster Non-Convex Federated Learning via Global and Local Momentum
Rudrajit Das
Anish Acharya
Abolfazl Hashemi
Sujay Sanghavi
Inderjit S. Dhillon
Ufuk Topcu
FedML
27
82
0
07 Dec 2020
On Communication Compression for Distributed Optimization on Heterogeneous Data
Sebastian U. Stich
45
22
0
04 Sep 2020
Federated Learning with Compression: Unified Analysis and Sharp Guarantees
Farzin Haddadpour
Mohammad Mahdi Kamani
Aryan Mokhtari
M. Mahdavi
FedML
23
271
0
02 Jul 2020
Optimal Complexity in Decentralized Training
Yucheng Lu
Christopher De Sa
25
71
0
15 Jun 2020
CDC: Classification Driven Compression for Bandwidth Efficient Edge-Cloud Collaborative Deep Learning
Yuanrui Dong
Peng Zhao
Hanqiao Yu
Cong Zhao
Shusen Yang
23
19
0
04 May 2020
A Unified Theory of Decentralized SGD with Changing Topology and Local Updates
Anastasia Koloskova
Nicolas Loizou
Sadra Boreiri
Martin Jaggi
Sebastian U. Stich
FedML
39
491
0
23 Mar 2020
Stochastic-Sign SGD for Federated Learning with Theoretical Guarantees
Richeng Jin
Yufan Huang
Xiaofan He
H. Dai
Tianfu Wu
FedML
22
63
0
25 Feb 2020
Communication-Efficient Decentralized Learning with Sparsification and Adaptive Peer Selection
Zhenheng Tang
S. Shi
X. Chu
FedML
13
57
0
22 Feb 2020
Communication-Efficient Edge AI: Algorithms and Systems
Yuanming Shi
Kai Yang
Tao Jiang
Jun Zhang
Khaled B. Letaief
GNN
17
326
0
22 Feb 2020
Distributed Non-Convex Optimization with Sublinear Speedup under Intermittent Client Availability
Yikai Yan
Chaoyue Niu
Yucheng Ding
Zhenzhe Zheng
Fan Wu
Guihai Chen
Shaojie Tang
Zhihua Wu
FedML
36
37
0
18 Feb 2020
On the Discrepancy between the Theoretical Analysis and Practical Implementations of Compressed Communication for Distributed Deep Learning
Aritra Dutta
El Houcine Bergou
A. Abdelmoniem
Chen-Yu Ho
Atal Narayan Sahu
Marco Canini
Panos Kalnis
25
76
0
19 Nov 2019
Hyper-Sphere Quantization: Communication-Efficient SGD for Federated Learning
Xinyan Dai
Xiao Yan
Kaiwen Zhou
Han Yang
K. K. Ng
James Cheng
Yu Fan
FedML
6
47
0
12 Nov 2019
Communication-Efficient Local Decentralized SGD Methods
Xiang Li
Wenhao Yang
Shusen Wang
Zhihua Zhang
16
53
0
21 Oct 2019
Gradient Descent with Compressed Iterates
Ahmed Khaled
Peter Richtárik
16
22
0
10 Sep 2019
Edge Intelligence: Paving the Last Mile of Artificial Intelligence with Edge Computing
Zhi Zhou
Xu Chen
En Li
Liekang Zeng
Ke Luo
Junshan Zhang
19
1,418
0
24 May 2019
MATCHA: Speeding Up Decentralized SGD via Matching Decomposition Sampling
Jianyu Wang
Anit Kumar Sahu
Zhouyi Yang
Gauri Joshi
S. Kar
21
159
0
23 May 2019
Decentralized Online Learning: Take Benefits from Others' Data without Sharing Your Own to Track Global Trend
Wendi Wu
Zongren Li
Yawei Zhao
Chenkai Yu
P. Zhao
Ji Liu
FedML
11
16
0
29 Jan 2019
Optimal Distributed Online Prediction using Mini-Batches
O. Dekel
Ran Gilad-Bachrach
Ohad Shamir
Lin Xiao
171
683
0
07 Dec 2010