MoE-Gen: High-Throughput MoE Inference on a Single GPU with Module-Based Batching

12 March 2025
Tairan Xu, Leyang Xue, Zhan Lu, Adrian Jackson, Luo Mai
Abstract

This paper presents MoE-Gen, a high-throughput MoE inference system optimized for single-GPU execution. Existing inference systems rely on model-based or continuous batching strategies, originally designed for interactive inference, which result in excessively small batches for MoE's key modules (attention and expert modules), leading to poor throughput. To address this, we introduce module-based batching, which accumulates tokens in host memory and dynamically launches large batches on GPUs to maximize utilization. Additionally, we optimize the choice of batch size for each module in an MoE to fully overlap GPU computation and communication, maximizing throughput. Evaluation demonstrates that MoE-Gen achieves 8-31x higher throughput compared to state-of-the-art systems employing model-based batching (FlexGen, MoE-Lightning, DeepSpeed), and offers even greater throughput improvements over continuous batching systems (e.g., vLLM and Ollama) on popular MoE models (DeepSeek and Mixtral) across offline inference tasks. MoE-Gen's source code is publicly available at this https URL.
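
The core idea of module-based batching can be illustrated with a short sketch: tokens are staged in host memory per module (attention or expert), and one large batch is launched on the GPU once a module-specific batch size is reached. This is an illustrative assumption of the mechanism, not MoE-Gen's actual API; ModuleBatcher, run_on_gpu, and the batch sizes below are hypothetical.

# Hypothetical sketch of module-based batching (not MoE-Gen's real API).
from collections import defaultdict

class ModuleBatcher:
    def __init__(self, batch_sizes):
        # Per-module batch sizes, chosen large enough to saturate the GPU.
        self.batch_sizes = batch_sizes
        self.host_buffers = defaultdict(list)  # tokens staged in host memory

    def add(self, module_name, token):
        # Stage a token for this module; flush once the buffer is full.
        buf = self.host_buffers[module_name]
        buf.append(token)
        if len(buf) >= self.batch_sizes[module_name]:
            return self.flush(module_name)
        return None

    def flush(self, module_name):
        # Launch one large batch on the GPU and clear the host buffer.
        batch = self.host_buffers.pop(module_name, [])
        if not batch:
            return None
        return run_on_gpu(module_name, batch)

def run_on_gpu(module_name, batch):
    # Stand-in for executing an attention or expert module on the GPU.
    return f"{module_name}: launched batch of {len(batch)} tokens"

# Usage: attention and expert modules get independently tuned batch sizes.
batcher = ModuleBatcher({"attention": 4, "expert_0": 8})
for t in range(10):
    result = batcher.add("attention", t)
    if result:
        print(result)
print(batcher.flush("attention"))  # drain the leftover tokens

In the paper's actual system, these per-module batch sizes are additionally chosen so that GPU computation overlaps with host-GPU communication, which the abstract identifies as the key to maximizing throughput.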

@article{xu2025_2503.09716,
  title={MoE-Gen: High-Throughput MoE Inference on a Single GPU with Module-Based Batching},
  author={Tairan Xu and Leyang Xue and Zhan Lu and Adrian Jackson and Luo Mai},
  journal={arXiv preprint arXiv:2503.09716},
  year={2025}
}