BaPipe: Exploration of Balanced Pipeline Parallelism for DNN Training
arXiv: 2012.12544 (v2, latest)

23 December 2020
Letian Zhao
Rui Xu
Tianqi Wang
Teng Tian
Xiaotian Wang
Wei Wu
Chio-in Ieong
Xi Jin
arXiv (abs) · PDF · HTML

Papers citing "BaPipe: Exploration of Balanced Pipeline Parallelism for DNN Training"

3 papers
SpikePipe: Accelerated Training of Spiking Neural Networks via Inter-Layer Pipelining and Multiprocessor Scheduling
Sai Sanjeet, B. Sahoo, Keshab K. Parhi
11 Jun 2024
FTPipeHD: A Fault-Tolerant Pipeline-Parallel Distributed Training Framework for Heterogeneous Edge Devices
Yuhao Chen, Qianqian Yang, Shibo He, Zhiguo Shi, Jiming Chen
06 Oct 2021
LayerPipe: Accelerating Deep Neural Network Training by Intra-Layer and Inter-Layer Gradient Pipelining and Multiprocessor Scheduling
Nanda K. Unnikrishnan, Keshab K. Parhi
14 Aug 2021