Efficient Pipeline Planning for Expedited Distributed DNN Training

22 April 2022
Ziyue Luo, Xiaodong Yi, Guoping Long, Shiqing Fan, Chuan Wu, Jun Yang, Wei Lin
Papers citing "Efficient Pipeline Planning for Expedited Distributed DNN Training"

7 / 7 papers shown

1. Prediction-Assisted Online Distributed Deep Learning Workload Scheduling in GPU Clusters
   Ziyue Luo, Jia-Wei Liu, Myungjin Lee, Ness B. Shroff
   09 Jan 2025

2. Acceleration for Deep Reinforcement Learning using Parallel and Distributed Computing: A Survey
   Zhihong Liu, Xin Xu, Peng Qiao, Dongsheng Li
   OffRL · 08 Nov 2024

3. Asteroid: Resource-Efficient Hybrid Pipeline Parallelism for Collaborative DNN Training on Heterogeneous Edge Devices
   Shengyuan Ye, Liekang Zeng, Xiaowen Chu, Guoliang Xing, Xu Chen
   15 Aug 2024

4. DiffusionPipe: Training Large Diffusion Models with Efficient Pipelines
   Ye Tian, Zhen Jia, Ziyue Luo, Yida Wang, Chuan Wu
   AI4CE · 02 May 2024

5. Practical Performance Guarantees for Pipelined DNN Inference
   Aaron Archer, Matthew Fahrbach, Kuikui Liu, Prakash Prabhu
   07 Nov 2023

6. Cloud-Native Computing: A Survey from the Perspective of Services
   Shuiguang Deng, Hailiang Zhao, Binbin Huang, Cheng Zhang, Feiyi Chen, Yinuo Deng, Jianwei Yin, Schahram Dustdar, Albert Y. Zomaya
   AI4TS · 26 Jun 2023

7. Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism
   M. Shoeybi, M. Patwary, Raul Puri, P. LeGresley, Jared Casper, Bryan Catanzaro
   MoE · 17 Sep 2019