arXiv:2510.05497v2 (latest)

Orders in Chaos: Enhancing Large-Scale MoE LLM Serving with Data Movement Forecasting

7 October 2025
Zhongkai Yu
Yue Guan
Zihao Yu
Chenyang Zhou
Shuyi Pei
Yangwook Kang
Yufei Ding
Po-An Tsai
Main: 11 pages, 15 figures, 2 tables; bibliography: 3 pages
Abstract

Large Language Models (LLMs) with Mixture of Experts (MoE) architectures achieve remarkable performance improvements, but their random expert selection mechanism introduces significant data movement overhead that becomes the dominant bottleneck in multi-unit serving systems. To forecast the patterns underlying this data movement, we conduct comprehensive data-movement-centric profiling across three state-of-the-art large-scale MoE models (200B-671B) using over 24,000 requests spanning diverse workloads. With the resulting 150GB+ trace files, we perform systematic analysis from both temporal and spatial perspectives and distill six key insights to guide the design of diverse future serving systems. Taking wafer-scale GPUs as a case study, we demonstrate that minor architectural modifications leveraging our insights achieve substantial performance gains, delivering 6.3X and 4.0X average speedups on DeepSeek V3 and Qwen3, respectively. Our work provides the first comprehensive data-centric analysis of MoE models at scale. Our profiling traces and analysis results are publicly available at this https URL. We will also release our simulation framework shortly to facilitate future research in this area.
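To make the data-movement problem concrete, the sketch below (a minimal illustration, not the paper's profiling framework; the expert count, top-k value, round-robin expert placement, and random router output are all hypothetical stand-ins) estimates how much inter-device dispatch traffic one MoE layer generates when tokens are routed to experts hosted on other serving units.

# Minimal sketch of MoE routing-induced data movement (hypothetical parameters;
# not the authors' tooling). Each token's activation must be shipped to every
# remote device that hosts one of its selected experts.
import numpy as np

rng = np.random.default_rng(0)

num_tokens = 1024          # tokens in one decoding batch (hypothetical)
num_experts = 64           # routed experts in one MoE layer (hypothetical)
top_k = 8                  # experts activated per token (hypothetical)
num_devices = 8            # serving units holding the experts (hypothetical)
hidden_dim = 7168          # activation width in elements (hypothetical)
bytes_per_elem = 2         # fp16/bf16 activations

# Static placement: experts sharded round-robin across devices.
expert_to_device = np.arange(num_experts) % num_devices

# Each token's activation currently resides on some device.
token_device = rng.integers(0, num_devices, size=num_tokens)

# Stand-in for the gating network's top-k selection per token.
routing = np.stack(
    [rng.choice(num_experts, size=top_k, replace=False) for _ in range(num_tokens)]
)

# Count dispatches to devices other than the token's home device.
remote_transfers = 0
for t in range(num_tokens):
    dest_devices = np.unique(expert_to_device[routing[t]])
    remote_transfers += int(np.sum(dest_devices != token_device[t]))

dispatch_bytes = remote_transfers * hidden_dim * bytes_per_elem
print(f"remote dispatches: {remote_transfers}")
print(f"dispatch traffic : {dispatch_bytes / 1e6:.1f} MB (the combine step adds a similar amount)")

Because the router's selections vary per token and per layer, this traffic pattern is hard to predict statically, which is why the abstract frames forecasting it from profiling traces as the key opportunity.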
