ScaleCom: Scalable Sparsified Gradient Compression for Communication-Efficient Distributed Training

21 April 2021
Chia-Yu Chen, Jiamin Ni, Songtao Lu, Xiaodong Cui, Pin-Yu Chen, Xiao Sun, Naigang Wang, Swagath Venkataramani, Vijayalakshmi Srinivasan, Wei Zhang, K. Gopalakrishnan

Papers citing "ScaleCom: Scalable Sparsified Gradient Compression for Communication-Efficient Distributed Training"

Novel Gradient Sparsification Algorithm via Bayesian Inference
Ali Bereyhi, B. Liang, G. Boudreau, Ali Afana
23 Sep 2024

Improved Convergence Analysis and SNR Control Strategies for Federated Learning in the Presence of Noise
Antesh Upadhyay, Abolfazl Hashemi
14 Jul 2023

DropCompute: Simple and More Robust Distributed Synchronous Training via Compute Variance Reduction
Niv Giladi, Shahar Gottlieb, Moran Shkolnik, A. Karnieli, Ron Banner, Elad Hoffer, Kfir Y. Levy, Daniel Soudry
18 Jun 2023

Distributed Adversarial Training to Robustify Deep Neural Networks at Scale
Gaoyuan Zhang, Songtao Lu, Yihua Zhang, Xiangyi Chen, Pin-Yu Chen, Quanfu Fan, Lee Martie, L. Horesh, Min-Fong Hong, Sijia Liu
13 Jun 2022