3LC: Lightweight and Effective Traffic Compression for Distributed Machine Learning
Hyeontaek Lim, D. Andersen, M. Kaminsky
arXiv:1802.07389, 21 February 2018

Papers citing "3LC: Lightweight and Effective Traffic Compression for Distributed Machine Learning" (21 papers)
Communication-Efficient Large-Scale Distributed Deep Learning: A Comprehensive Survey
Feng Liang, Zhen Zhang, Haifeng Lu, Victor C. M. Leung, Yanyi Guo, Xiping Hu. 09 Apr 2024.
Enhancing Efficiency in Multidevice Federated Learning through Data Selection
Fan Mo, Mohammad Malekzadeh, S. Chatterjee, F. Kawsar, Akhil Mathur. 08 Nov 2022.
L-GreCo: Layerwise-Adaptive Gradient Compression for Efficient and Accurate Deep Learning
Mohammadreza Alimohammadi, I. Markov, Elias Frantar, Dan Alistarh. 31 Oct 2022.
Correlated quantization for distributed mean estimation and optimization
A. Suresh, Ziteng Sun, Jae Hun Ro, Felix X. Yu. International Conference on Machine Learning (ICML), 2022. 09 Mar 2022.
Optimizing the Communication-Accuracy Trade-off in Federated Learning with Rate-Distortion Theory
Nicole Mitchell, Johannes Ballé, Zachary B. Charles, Jakub Konecný. 07 Jan 2022.
FastSGD: A Fast Compressed SGD Framework for Distributed Machine Learning
Keyu Yang, Lu Chen, Zhihao Zeng, Yunjun Gao. 08 Dec 2021.
CGX: Adaptive System Support for Communication-Efficient Deep Learning
I. Markov, Hamidreza Ramezanikebrya, Dan Alistarh. 16 Nov 2021.
Distributed Learning Systems with First-order Methods
Ji Liu, Ce Zhang. 12 Apr 2021.
Software-Hardware Co-design for Fast and Scalable Training of Deep Learning Recommendation Models
Dheevatsa Mudigere, Y. Hao, Jianyu Huang, Zhihao Jia, Andrew Tulloch, ..., Ajit Mathews, Lin Qiao, M. Smelyanskiy, Bill Jia, Vijay Rao. International Symposium on Computer Architecture (ISCA), 2021. 12 Apr 2021.
MergeComp: A Compression Scheduler for Scalable Communication-Efficient Distributed Training
Zhuang Wang, X. Wu, T. Ng. 28 Mar 2021.
CrossoverScheduler: Overlapping Multiple Distributed Training Applications in a Crossover Manner
Cheng Luo, L. Qu, Youshan Miao, Jun Zhou, Y. Xiong. 14 Mar 2021.
DeepReduce: A Sparse-tensor Communication Framework for Distributed Deep Learning
Kelly Kostopoulou, Hang Xu, Aritra Dutta, Xin Li, A. Ntoulas, Panos Kalnis. 05 Feb 2021.
A Better Alternative to Error Feedback for Communication-Efficient Distributed Learning
Samuel Horváth, Peter Richtárik. 19 Jun 2020.
Is Network the Bottleneck of Distributed Training?
Zhen Zhang, Chaokun Chang, Yanghua Peng, Yida Wang, R. Arora, Xin Jin. 17 Jun 2020.
Breaking (Global) Barriers in Parallel Stochastic Optimization with Wait-Avoiding Group Averaging
Shigang Li, Tal Ben-Nun, Giorgi Nadiradze, Salvatore Di Girolamo, Nikoli Dryden, Dan Alistarh, Torsten Hoefler. IEEE Transactions on Parallel and Distributed Systems (TPDS), 2020. 30 Apr 2020.
Communication optimization strategies for distributed deep neural network training: A survey
Shuo Ouyang, Dezun Dong, Yemao Xu, Liquan Xiao. 06 Mar 2020.
On Biased Compression for Distributed Learning
Aleksandr Beznosikov, Samuel Horváth, Peter Richtárik, M. Safaryan. Journal of Machine Learning Research (JMLR), 2020. 27 Feb 2020.
On the Discrepancy between the Theoretical Analysis and Practical Implementations of Compressed Communication for Distributed Deep Learning
Aritra Dutta, El Houcine Bergou, A. Abdelmoniem, Chen-Yu Ho, Atal Narayan Sahu, Marco Canini, Panos Kalnis. AAAI Conference on Artificial Intelligence (AAAI), 2019. 19 Nov 2019.
Progressive Compressed Records: Taking a Byte out of Deep Learning Data
Michael Kuchnik, George Amvrosiadis, Virginia Smith. Proceedings of the VLDB Endowment (PVLDB), 2019. 01 Nov 2019.
Taming Momentum in a Distributed Asynchronous Environment
Ido Hakimi, Saar Barkai, Moshe Gabel, Assaf Schuster. 26 Jul 2019.
Natural Compression for Distributed Deep Learning
Samuel Horváth, Chen-Yu Ho, L. Horvath, Atal Narayan Sahu, Marco Canini, Peter Richtárik. Mathematical and Scientific Machine Learning (MSML), 2019. 27 May 2019.