Model Accuracy and Runtime Tradeoff in Distributed Deep Learning: A Systematic Study

Suyog Gupta, Wei Zhang, Fei Wang
arXiv:1509.04210, 14 September 2015

Papers citing "Model Accuracy and Runtime Tradeoff in Distributed Deep Learning: A Systematic Study"

27 / 27 papers shown
 1. Taming Resource Heterogeneity In Distributed ML Training With Dynamic Batching
    S. Tyagi, Prateek Sharma (20 May 2023)

 2. STSyn: Speeding Up Local SGD with Straggler-Tolerant Synchronization
    Feng Zhu, Jingjing Zhang, Xin Eric Wang (06 Oct 2022)

 3. Byzantine Fault Tolerance in Distributed Machine Learning: A Survey
    Djamila Bouhata, Hamouma Moumen, Ahcène Bounceur (05 May 2022) [AI4CE]

 4. DSAG: A Mixed Synchronous-Asynchronous Iterative Method for Straggler-Resilient Learning
    A. Severinson, E. Rosnes, S. E. Rouayheb, Alexandre Graell i Amat (27 Nov 2021)

 5. SpreadGNN: Serverless Multi-task Federated Learning for Graph Neural Networks
    Chaoyang He, Emir Ceyani, Keshav Balasubramanian, M. Annavaram, Salman Avestimehr (04 Jun 2021) [FedML]

 6. ScaleFreeCTR: MixCache-based Distributed Training System for CTR Models with Huge Embedding Table
    Huifeng Guo, Wei Guo, Yong Gao, Ruiming Tang, Xiuqiang He, Wenzhi Liu (17 Apr 2021)

 7. Consistent Lock-free Parallel Stochastic Gradient Descent for Fast and Stable Convergence
    Karl Bäckström, Ivan Walulya, Marina Papatriantafilou, P. Tsigas (17 Feb 2021)

 8. Communication Optimization Strategies for Distributed Deep Neural Network Training: A Survey
    Shuo Ouyang, Dezun Dong, Yemao Xu, Liquan Xiao (06 Mar 2020)

 9. Taming Unbalanced Training Workloads in Deep Learning with Partial Collective Operations
    Shigang Li, Tal Ben-Nun, Salvatore Di Girolamo, Dan Alistarh, Torsten Hoefler (12 Aug 2019)

10. Database Meets Deep Learning: Challenges and Opportunities
    Wei Wang, Meihui Zhang, Gang Chen, H. V. Jagadish, Beng Chin Ooi, K. Tan (21 Jun 2019)

11. MATCHA: Speeding Up Decentralized SGD via Matching Decomposition Sampling
    Jianyu Wang, Anit Kumar Sahu, Zhouyi Yang, Gauri Joshi, S. Kar (23 May 2019)

12. Lynceus: Cost-efficient Tuning and Provisioning of Data Analytic Jobs
    Maria Casimiro, Diego Didona, Paolo Romano, L. Rodrigues, Willy Zwaenepoel, David Garlan (06 May 2019)

13. MD-GAN: Multi-Discriminator Generative Adversarial Networks for Distributed Datasets
    Corentin Hardy, Erwan Le Merrer, B. Sericola (09 Nov 2018) [GAN]

14. Adaptive Communication Strategies to Achieve the Best Error-Runtime Trade-off in Local-Update SGD
    Jianyu Wang, Gauri Joshi (19 Oct 2018) [FedML]

15. Cooperative SGD: A Unified Framework for the Design and Analysis of Communication-Efficient SGD Algorithms
    Jianyu Wang, Gauri Joshi (22 Aug 2018)

16. Parallax: Sparsity-aware Data Parallel Training of Deep Neural Networks
    Soojeong Kim, Gyeong-In Yu, Hojin Park, Sungwoo Cho, Eunji Jeong, Hyeonmin Ha, Sanha Lee, Joo Seong Jeong, Byung-Gon Chun (08 Aug 2018)

17. Training LSTM Networks with Resistive Cross-Point Devices
    Tayfun Gokmen, Malte J. Rasch, W. Haensch (01 Jun 2018)

18. Deep Learning in Mobile and Wireless Networking: A Survey
    Chaoyun Zhang, P. Patras, Hamed Haddadi (12 Mar 2018)

19. Slow and Stale Gradients Can Win the Race: Error-Runtime Trade-offs in Distributed SGD
    Sanghamitra Dutta, Gauri Joshi, Soumyadip Ghosh, Parijat Dube, P. Nagpurkar (03 Mar 2018)

20. Demystifying Parallel and Distributed Deep Learning: An In-Depth Concurrency Analysis
    Tal Ben-Nun, Torsten Hoefler (26 Feb 2018) [GNN]

21. AdaComp: Adaptive Residual Gradient Compression for Data-Parallel Distributed Training
    Chia-Yu Chen, Jungwook Choi, D. Brand, A. Agrawal, Wei Zhang, K. Gopalakrishnan (07 Dec 2017) [ODL]

22. Collaborative Deep Learning in Fixed Topology Networks
    Zhanhong Jiang, Aditya Balu, Chinmay Hegde, Soumik Sarkar (23 Jun 2017) [FedML]

23. Analog CMOS-based Resistive Processing Unit for Deep Neural Network Training
    Seyoung Kim, Tayfun Gokmen, Hyung-Min Lee, W. Haensch (20 Jun 2017)

24. Training Deep Convolutional Neural Networks with Resistive Cross-Point Devices
    Tayfun Gokmen, M. Onen, W. Haensch (22 May 2017)

25. Deep Learning Convolutional Networks for Multiphoton Microscopy Vasculature Segmentation
    Petteri Teikari, Marc A. Santos, Charissa Poon, K. Hynynen (08 Jun 2016) [3DV]

26. Staleness-aware Async-SGD for Distributed Deep Learning
    Wei Zhang, Suyog Gupta, Xiangru Lian, Ji Liu (18 Nov 2015)

27. The Effects of Hyperparameters on SGD Training of Neural Networks
    Thomas Breuel (12 Aug 2015)