Preserving Near-Optimal Gradient Sparsification Cost for Scalable Distributed Deep Learning
Daegun Yoon, Sangyoon Oh
arXiv:2402.13781, 21 February 2024
Papers citing "Preserving Near-Optimal Gradient Sparsification Cost for Scalable Distributed Deep Learning" (2 of 2 papers shown):
ScaleCom: Scalable Sparsified Gradient Compression for Communication-Efficient Distributed Training
Chia-Yu Chen, Jiamin Ni, Songtao Lu, Xiaodong Cui, Pin-Yu Chen, ..., Naigang Wang, Swagath Venkataramani, Vijayalakshmi Srinivasan, Wei Zhang, K. Gopalakrishnan
21 Apr 2021

An Efficient Statistical-based Gradient Compression Technique for Distributed Training Systems
A. Abdelmoniem, Ahmed Elzanaty, Mohamed-Slim Alouini, Marco Canini
26 Jan 2021