ResearchTrend.AI

arXiv:2409.01366
CHESS: Optimizing LLM Inference via Channel-Wise Thresholding and Selective Sparsification

2 September 2024
Junhui He
Shangyu Wu
Weidong Wen
Chun Jason Xue
Qingan Li

Papers citing "CHESS: Optimizing LLM Inference via Channel-Wise Thresholding and Selective Sparsification"

2 of 2 papers shown
FloE: On-the-Fly MoE Inference on Memory-constrained GPU
Yuxin Zhou, Zheng Li, J. Zhang, Jue Wang, Y. Wang, Zhongle Xie, Ke Chen, Lidan Shou
MoE · 09 May 2025
Faster MoE LLM Inference for Extremely Large Models
Haoqi Yang, Luohe Shi, Qiwei Li, Zuchao Li, Ping Wang, Bo Du, Mengjia Shen, Hai Zhao
MoE · 06 May 2025