
MxMoE: Mixed-precision Quantization for MoE with Accuracy and Performance Co-Design

Abstract

Mixture-of-Experts (MoE) models face deployment challenges due to their large parameter counts and computational demands. We explore quantization for MoE models and highlight two key insights: 1) linear blocks exhibit varying quantization sensitivity, and 2) divergent expert activation frequencies create heterogeneous computational characteristics. Based on these observations, we introduce MxMoE, a mixed-precision optimization framework for MoE models that considers both algorithmic and system perspectives. MxMoE navigates the design space defined by parameter sensitivity, expert activation dynamics, and hardware resources to derive efficient mixed-precision configurations. Additionally, MxMoE automatically generates optimized mixed-precision GroupGEMM kernels, enabling parallel execution of GEMMs with different precisions. Evaluations show that MxMoE outperforms existing methods, achieving 2.4 lower Wikitext-2 perplexity than GPTQ at 2.25-bit, delivering up to 3.4x speedup over full precision, and up to 29.4% speedup over uniform quantization at equivalent accuracy with 5-bit weight-activation quantization. Our code is available at this https URL.

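To make the abstract's core idea concrete, the sketch below shows one way per-expert weight bit-widths might be chosen for a single MoE layer by trading off quantization sensitivity against expert activation frequency under an average-bit budget. The ExpertStats fields, the allocate_bits function, and the greedy scoring heuristic are hypothetical illustrations for intuition only; they are not the optimization procedure or kernel generation described in the paper.

# Hypothetical sketch: per-expert bit-width allocation for one MoE layer.
# Upgrades are spent where accuracy is most at risk (high sensitivity) and
# runtime cost is lowest (low activation frequency). Illustrative only.
from dataclasses import dataclass

@dataclass
class ExpertStats:
    expert_id: int
    sensitivity: float   # proxy for accuracy loss under aggressive quantization
    act_freq: float      # fraction of tokens routed to this expert

def allocate_bits(experts, candidate_bits=(2, 3, 4, 8), avg_bit_budget=3.0):
    """Greedy allocation: start all experts at the lowest bit-width, then spend
    the remaining budget on the most attractive single-step upgrades."""
    n = len(experts)
    bits = {e.expert_id: candidate_bits[0] for e in experts}
    budget = avg_bit_budget * n - sum(bits.values())

    def upgrade_score(e, cur_bits):
        # Higher score = upgrade helps accuracy more per unit of runtime cost.
        return e.sensitivity / (1e-6 + e.act_freq * cur_bits)

    while budget > 0:
        best = None
        for e in experts:
            cur = bits[e.expert_id]
            idx = candidate_bits.index(cur)
            if idx + 1 >= len(candidate_bits):
                continue  # already at the highest candidate precision
            step = candidate_bits[idx + 1] - cur
            if step > budget:
                continue  # upgrade would exceed the remaining bit budget
            score = upgrade_score(e, cur)
            if best is None or score > best[0]:
                best = (score, e, candidate_bits[idx + 1], step)
        if best is None:
            break
        _, e, new_bits, step = best
        bits[e.expert_id] = new_bits
        budget -= step
    return bits

if __name__ == "__main__":
    experts = [
        ExpertStats(0, sensitivity=0.9, act_freq=0.05),  # sensitive, rarely routed
        ExpertStats(1, sensitivity=0.2, act_freq=0.40),  # robust, hot expert
        ExpertStats(2, sensitivity=0.5, act_freq=0.30),
        ExpertStats(3, sensitivity=0.7, act_freq=0.25),
    ]
    print(allocate_bits(experts, avg_bit_budget=3.0))
    # e.g. {0: 4, 1: 2, 2: 2, 3: 4}: sensitive, lightly used experts get more bits

In MxMoE proper, such per-block precision choices also feed the generated mixed-precision GroupGEMM kernels so that GEMMs of different precisions execute in parallel; the sketch only covers the allocation intuition.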
@article{duanmu2025_2505.05799,
  title={MxMoE: Mixed-precision Quantization for MoE with Accuracy and Performance Co-Design},
  author={Haojie Duanmu and Xiuhong Li and Zhihang Yuan and Size Zheng and Jiangfei Duan and Xingcheng Zhang and Dahua Lin},
  journal={arXiv preprint arXiv:2505.05799},
  year={2025}
}