Slapo: A Schedule Language for Progressive Optimization of Large Deep Learning Model Training

16 February 2023
Hongzheng Chen, Cody Hao Yu, Shuai Zheng, Zhen Zhang, Zhiru Zhang, Yida Wang

Papers citing "Slapo: A Schedule Language for Progressive Optimization of Large Deep Learning Model Training"

3 / 3 papers shown

Allo: A Programming Model for Composable Accelerator Design
Hongzheng Chen, Niansong Zhang, Shaojie Xiang, Zhichen Zeng, Mengjia Dai, Zhiru Zhang
07 Apr 2024

RAF: Holistic Compilation for Deep Learning Model Training
Cody Hao Yu, Haozheng Fan, Guangtai Huang, Zhen Jia, Yizhi Liu, ..., Yuan Zhou, Haichen Shen, Junru Shao, Mu Li, Yida Wang
08 Mar 2023

Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism
M. Shoeybi, M. Patwary, Raul Puri, P. LeGresley, Jared Casper, Bryan Catanzaro
MoE
17 Sep 2019