
An Efficient Statistical-based Gradient Compression Technique for Distributed Training Systems (arXiv:2101.10761)

26 January 2021
A. Abdelmoniem, Ahmed Elzanaty, Mohamed-Slim Alouini, Marco Canini

Papers citing "An Efficient Statistical-based Gradient Compression Technique for Distributed Training Systems"

5 / 5 papers shown
Novel Gradient Sparsification Algorithm via Bayesian Inference
Ali Bereyhi, B. Liang, G. Boudreau, Ali Afana
23 Sep 2024

Meta-Learning with a Geometry-Adaptive Preconditioner
Suhyun Kang, Duhun Hwang, Moonjung Eo, Taesup Kim, Wonjong Rhee (AI4CE)
04 Apr 2023

AI-based Fog and Edge Computing: A Systematic Review, Taxonomy and Future Directions
Sundas Iftikhar, S. Gill, Chenghao Song, Minxian Xu, M. Aslanpour, ..., Félix Cuadrado, Blesson Varghese, Omer F. Rana, Schahram Dustdar, Steve Uhlig
09 Dec 2022

Rethinking gradient sparsification as total error minimization
Atal Narayan Sahu, Aritra Dutta, A. Abdelmoniem, Trambak Banerjee, Marco Canini, Panos Kalnis
02 Aug 2021

Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism
M. Shoeybi, M. Patwary, Raul Puri, P. LeGresley, Jared Casper, Bryan Catanzaro (MoE)
17 Sep 2019