Domain-Specific Pruning of Large Mixture-of-Experts Models with Few-shot Demonstrations

9 April 2025
Zican Dong
Han Peng
Peiyu Liu
Wayne Xin Zhao
Dong Wu
Feng Xiao
Zhifeng Wang
Abstract

Mixture-of-Experts (MoE) models achieve a favorable trade-off between performance and inference efficiency by activating only a subset of experts. However, the memory overhead of storing all experts remains a major limitation, especially in large-scale MoE models such as DeepSeek-R1 (671B). In this study, we investigate domain specialization and expert redundancy in large-scale MoE models and uncover a consistent behavior we term few-shot expert localization: with only a few demonstrations, the model consistently activates a sparse and stable subset of experts. Building on this observation, we propose a simple yet effective pruning framework, EASY-EP, that leverages a few domain-specific demonstrations to identify and retain only the most relevant experts. EASY-EP comprises two key components: output-aware expert importance assessment and expert-level token contribution estimation. The former evaluates the importance of each expert for the current token by considering the gating scores and the magnitudes of the outputs of activated experts, while the latter assesses the contribution of each token based on the similarity of its representations before and after the routed experts. Experiments show that, under the same memory budget, our method matches the performance of the full DeepSeek-R1 and achieves 2.99× its throughput while retaining only half of the experts. Our code is available at this https URL.
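
As a rough illustration of the two scoring components described in the abstract, the sketch below computes a per-expert importance score from gating scores and expert output magnitudes, weights it by a token contribution term derived from how much the routed experts change each token's representation, and keeps the top-scoring half of the experts. All names, tensor shapes, and the exact weighting (here, one minus cosine similarity as the token contribution) are assumptions for illustration, not the authors' implementation.

import torch

def expert_importance_scores(
    gate_scores: torch.Tensor,     # (T, E) routing probabilities per token and expert
    expert_outputs: torch.Tensor,  # (T, E, D) output of each activated expert (zeros if inactive)
    hidden_pre: torch.Tensor,      # (T, D) hidden states before the routed experts
    hidden_post: torch.Tensor,     # (T, D) hidden states after the routed experts
) -> torch.Tensor:
    """Return one importance score per expert, aggregated over demonstration tokens."""
    # Output-aware expert importance: gating score times output magnitude (assumed form).
    per_token_importance = gate_scores * expert_outputs.norm(dim=-1)  # (T, E)

    # Token contribution estimate (assumption): tokens whose representations change
    # more across the routed experts (lower cosine similarity) receive higher weight.
    sim = torch.cosine_similarity(hidden_pre, hidden_post, dim=-1)    # (T,)
    token_weight = (1.0 - sim).clamp(min=0.0)                         # (T,)

    # Aggregate over the tokens of the few-shot domain demonstrations.
    return (token_weight.unsqueeze(-1) * per_token_importance).sum(dim=0)  # (E,)

def select_experts(scores: torch.Tensor, keep_ratio: float = 0.5) -> torch.Tensor:
    """Keep the top-scoring fraction of experts (e.g. half, as in the paper's setting)."""
    k = max(1, int(scores.numel() * keep_ratio))
    return torch.topk(scores, k).indices

In practice such scores would be accumulated over a handful of domain demonstrations and the unselected routed experts dropped from memory, which is the pruning setting the abstract describes.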

View on arXiv
@article{dong2025_2504.06792,
  title={Domain-Specific Pruning of Large Mixture-of-Experts Models with Few-shot Demonstrations},
  author={Zican Dong and Han Peng and Peiyu Liu and Wayne Xin Zhao and Dong Wu and Feng Xiao and Zhifeng Wang},
  journal={arXiv preprint arXiv:2504.06792},
  year={2025}
}