Lazarus: Resilient and Elastic Training of Mixture-of-Experts Models
5 July 2024 · arXiv:2407.04656
Yongji Wu, Wenjie Qu, Xueshen Liu, Tianyang Tao, Wei Bai, Zhuang Wang, Jiaheng Zhang, Z. Morley Mao, Matthew Lentz, Danyang Zhuo, Ion Stoica
Links: arXiv (abs) · PDF · HTML · GitHub (25143★)

Papers citing "Lazarus: Resilient and Elastic Training of Mixture-of-Experts Models" (2 papers)

RLBoost: Harvesting Preemptible Resources for Cost-Efficient Reinforcement Learning on LLMs
Yongji Wu, Xueshen Liu, Haizhong Zheng, Juncheng Gu, Beidi Chen, Z. Morley Mao, Arvind Krishnamurthy, Eric Liang
Topics: OffRL, SILM, OnRL
22 Oct 2025

HeterMoE: Efficient Training of Mixture-of-Experts Models on Heterogeneous GPUs
Yongji Wu, Xueshen Liu, Shuowei Jin, Ceyu Xu, Feng Qian, Ron Yifeng Wang, Matthew Lentz, Danyang Zhuo, Ion Stoica
Topics: MoMe, MoE
04 Apr 2025