ResearchTrend.AI

GPipe: Efficient Training of Giant Neural Networks using Pipeline Parallelism

16 November 2018
Yanping Huang, Yonglong Cheng, Ankur Bapna, Orhan Firat, Mia Xu Chen, Dehao Chen, HyoukJoong Lee, Jiquan Ngiam, Quoc V. Le, Yonghui Wu, Zhifeng Chen
Topics: GNN, MoE
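The paper's title names its core technique: pipeline parallelism with micro-batching. As context, a minimal illustrative sketch is given below — a pure-Python simulation with hypothetical stage functions, not GPipe's actual API. A model is split into stages (one per device), and each input batch is split into micro-batches that flow through the stages, so in a real system different stages can work on different micro-batches at the same time.

```python
# Illustrative sketch of pipeline parallelism with micro-batches.
# Each "stage" stands in for one partition of a model; in GPipe-style
# training, each partition lives on its own accelerator.

def stage1(x):
    return x + 1          # stand-in for the first model partition

def stage2(x):
    return x * 2          # stand-in for the second model partition

stages = [stage1, stage2]

def pipeline_forward(batch, num_microbatches):
    # Split the batch into micro-batches.
    size = len(batch) // num_microbatches
    micro = [batch[i * size:(i + 1) * size] for i in range(num_microbatches)]
    # Push each micro-batch through every stage. This loop runs
    # sequentially here; on real hardware the stages overlap in time,
    # which is what makes the pipeline efficient.
    outputs = []
    for mb in micro:
        for stage in stages:
            mb = [stage(x) for x in mb]
        outputs.extend(mb)
    return outputs

print(pipeline_forward([1, 2, 3, 4], num_microbatches=2))  # [4, 6, 8, 10]
```

The micro-batch count trades pipeline "bubble" (idle time while the pipeline fills and drains) against per-micro-batch overhead; GPipe's contribution is making this split work for very large models.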

Papers citing "GPipe: Efficient Training of Giant Neural Networks using Pipeline Parallelism"

2 / 2 papers shown
Galvatron: Efficient Transformer Training over Multiple GPUs Using Automatic Parallelism
Xupeng Miao, Yujie Wang, Youhe Jiang, Chunan Shi, Xiaonan Nie, Hailin Zhang, Bin Cui
Topics: GNN, MoE
25 Nov 2022
Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation
Yonghui Wu, M. Schuster, Z. Chen, Quoc V. Le, Mohammad Norouzi, ..., Alex Rudnick, Oriol Vinyals, G. Corrado, Macduff Hughes, J. Dean
Topics: AIMat
26 Sep 2016