Lazarus: Resilient and Elastic Training of Mixture-of-Experts Models
arXiv:2407.04656 · v2 (latest)
5 July 2024
Yongji Wu
Wenjie Qu
Xueshen Liu
Tianyang Tao
Wei Bai
Zhuang Wang
Jiaheng Zhang
Z. Morley Mao
Matthew Lentz
Danyang Zhuo
Ion Stoica
arXiv (abs) · PDF · HTML · GitHub (25,143★)

Papers citing "Lazarus: Resilient and Elastic Training of Mixture-of-Experts Models"

2 papers
RLBoost: Harvesting Preemptible Resources for Cost-Efficient Reinforcement Learning on LLMs
Yongji Wu
Xueshen Liu
Haizhong Zheng
Juncheng Gu
Beidi Chen
Z. Morley Mao
Arvind Krishnamurthy
Eric Liang
OffRL · SILM · OnRL
22 Oct 2025
HeterMoE: Efficient Training of Mixture-of-Experts Models on Heterogeneous GPUs
Yongji Wu
Xueshen Liu
Shuowei Jin
Ceyu Xu
Feng Qian
Ron Yifeng Wang
Matthew Lentz
Danyang Zhuo
Ion Stoica
MoMe · MoE
04 Apr 2025