Communication optimization strategies for distributed deep neural network training: A survey
Shuo Ouyang, Dezun Dong, Yemao Xu, Liquan Xiao
6 March 2020 · arXiv:2003.03009
Papers citing "Communication optimization strategies for distributed deep neural network training: A survey" (5 papers)

  • Fair and Efficient Distributed Edge Learning with Hybrid Multipath TCP
    Shiva Raj Pokhrel, Jinho D. Choi, A. Walid · 03 Nov 2022
  • DBS: Dynamic Batch Size For Distributed Deep Neural Network Training
    Qing Ye, Yuhao Zhou, Mingjia Shi, Yanan Sun, Jiancheng Lv · 23 Jul 2020
  • Enabling Compute-Communication Overlap in Distributed Deep Learning Training Platforms
    Saeed Rashidi, Matthew Denton, Srinivas Sridharan, S. Srinivasan, Amoghavarsha Suresh, Jade Nie, T. Krishna · 30 Jun 2020
  • On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
    N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang · 15 Sep 2016
  • Optimal Distributed Online Prediction using Mini-Batches
    O. Dekel, Ran Gilad-Bachrach, Ohad Shamir, Lin Xiao · 07 Dec 2010