arXiv:1805.04170
Unifying Data, Model and Hybrid Parallelism in Deep Learning via Tensor Tiling
Minjie Wang, Chien-chin Huang, Jinyang Li
10 May 2018
Tags: FedML
Papers citing "Unifying Data, Model and Hybrid Parallelism in Deep Learning via Tensor Tiling" (8 of 8 papers shown):
- A Survey From Distributed Machine Learning to Distributed Deep Learning (11 Jul 2023). Mohammad Dehghani, Zahra Yazdanparast.
- Optimus-CC: Efficient Large NLP Model Training with 3D Parallelism Aware Communication Compression (24 Jan 2023). Jaeyong Song, Jinkyu Yim, Jaewon Jung, Hongsun Jang, H. Kim, Youngsok Kim, Jinho Lee. Tags: GNN.
- ALT: Boosting Deep Learning Performance by Breaking the Wall between Graph and Operator Level Optimizations (22 Oct 2022). Zhiying Xu, Jiafan Xu, H. Peng, Wei Wang, Xiaoliang Wang, ..., Haipeng Dai, Yixu Xu, Hao Cheng, Kun Wang, Guihai Chen.
- A Hybrid Parallelization Approach for Distributed and Scalable Deep Learning (11 Apr 2021). S. Akintoye, Liangxiu Han, Xin Zhang, Haoming Chen, Daoqiang Zhang.
- Hybrid Data-Model Parallel Training for Sequence-to-Sequence Recurrent Neural Network Machine Translation (02 Sep 2019). Junya Ono, Masao Utiyama, Eiichiro Sumita. Tags: AIMat, AI4CE.
- HyPar: Towards Hybrid Parallelism for Deep Learning Accelerator Array (07 Jan 2019). Linghao Song, Jiachen Mao, Youwei Zhuo, Xuehai Qian, Hai Helen Li, Yiran Chen.
- Parallax: Sparsity-aware Data Parallel Training of Deep Neural Networks (08 Aug 2018). Soojeong Kim, Gyeong-In Yu, Hojin Park, Sungwoo Cho, Eunji Jeong, Hyeonmin Ha, Sanha Lee, Joo Seong Jeong, Byung-Gon Chun.
- On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima (15 Sep 2016). N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang. Tags: ODL.