Understanding Top-k Sparsification in Distributed Deep Learning

20 November 2019
S. Shi, X. Chu, Ka Chun Cheung, Simon See
arXiv:1911.08772

Papers citing "Understanding Top-k Sparsification in Distributed Deep Learning"

Showing 10 of 10 citing papers

Sparsification Under Siege: Defending Against Poisoning Attacks in Communication-Efficient Federated Learning
Zhiyong Jin, Runhua Xu, C. Li, Y. Liu, Jianxin Li (AAML, FedML)
30 Apr 2025

Delayed Random Partial Gradient Averaging for Federated Learning
Xinyi Hu (FedML)
31 Dec 2024

Achieving Dimension-Free Communication in Federated Learning via Zeroth-Order Optimization
Zhe Li, Bicheng Ying, Zidong Liu, Haibo Yang (FedML)
24 May 2024

GraVAC: Adaptive Compression for Communication-Efficient Distributed DL Training
S. Tyagi, Martin Swany
20 May 2023

Personalized Privacy-Preserving Framework for Cross-Silo Federated Learning
Van Tuan Tran, Huy Hieu Pham, Kok-Seng Wong (FedML)
22 Feb 2023

Towards Efficient Communications in Federated Learning: A Contemporary Survey
Zihao Zhao, Yuzhu Mao, Yang Liu, Linqi Song, Ouyang Ye, Xinlei Chen, Wenbo Ding (FedML)
02 Aug 2022

DNN gradient lossless compression: Can GenNorm be the answer?
Zhongzhu Chen, Eduin E. Hernandez, Yu-Chih Huang, Stefano Rini
15 Nov 2021

Rethinking gradient sparsification as total error minimization
Atal Narayan Sahu, Aritra Dutta, A. Abdelmoniem, Trambak Banerjee, Marco Canini, Panos Kalnis
02 Aug 2021

GOBO: Quantizing Attention-Based NLP Models for Low Latency and Energy Efficient Inference
Ali Hadi Zadeh, Isak Edo, Omar Mohamed Awad, Andreas Moshovos (MQ)
08 May 2020

Communication-Efficient Decentralized Learning with Sparsification and Adaptive Peer Selection
Zhenheng Tang, S. Shi, X. Chu (FedML)
22 Feb 2020