ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

MoE-Pruner: Pruning Mixture-of-Experts Large Language Model using the Hints from Its Router (arXiv:2410.12013)

15 October 2024
Yanyue Xie
Zhi Zhang
Ding Zhou
Cong Xie
Ziang Song
Xin Liu
Yanzhi Wang
Xue Lin
An Xu
    LLMAG

Papers citing "MoE-Pruner: Pruning Mixture-of-Experts Large Language Model using the Hints from Its Router"

1 of 1 citing papers shown

Faster MoE LLM Inference for Extremely Large Models
Haoqi Yang, Luohe Shi, Qiwei Li, Zuchao Li, Ping Wang, Bo Du, Mengjia Shen, Hai Zhao
MoE
06 May 2025