GateBreaker: Gate-Guided Attacks on Mixture-of-Expert LLMs

Lichao Wu
Sasha Behrouzi
Mohamadreza Rostami
Stjepan Picek
Ahmad-Reza Sadeghi
Main: 13 pages, Appendix: 2 pages, Bibliography: 4 pages; 5 figures, 14 tables
Abstract

Mixture-of-Experts (MoE) architectures have advanced the scaling of Large Language Models (LLMs) by activating only a sparse subset of parameters per input, enabling state-of-the-art performance with reduced computational cost. As these models are increasingly deployed in critical domains, understanding and strengthening their alignment mechanisms is essential to prevent harmful outputs. However, existing LLM safety research has focused almost exclusively on dense architectures, leaving the unique safety properties of MoEs largely unexamined. The modular, sparsely activated design of MoEs suggests that safety mechanisms may operate differently than in dense models, raising questions about their robustness.
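To make the sparse-activation mechanism the abstract refers to concrete, below is a minimal sketch of a top-k gated MoE layer in PyTorch. The gate scores every expert per token and only the k highest-scoring experts are executed. All dimensions, the expert architecture, and the top-k routing scheme here are illustrative assumptions for a generic MoE layer, not the specific models or attack studied in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoELayer(nn.Module):
    """Sparsely activated MoE layer: a gate routes each token to k of E experts.
    (Generic illustration; dimensions and expert structure are assumptions.)"""

    def __init__(self, d_model=512, d_hidden=1024, num_experts=8, k=2):
        super().__init__()
        self.k = k
        # Gating network: one routing logit per expert for each token.
        self.gate = nn.Linear(d_model, num_experts, bias=False)
        # Experts: independent feed-forward blocks; only k run per token.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(),
                          nn.Linear(d_hidden, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x):                      # x: (num_tokens, d_model)
        logits = self.gate(x)                  # (num_tokens, num_experts)
        weights, indices = logits.topk(self.k, dim=-1)
        weights = F.softmax(weights, dim=-1)   # normalize over selected experts only
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = indices[:, slot] == e   # tokens whose slot-th choice is expert e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out

# Example: 4 tokens pass through the layer; only 2 of 8 experts fire per token.
tokens = torch.randn(4, 512)
layer = TopKMoELayer()
print(layer(tokens).shape)  # torch.Size([4, 512])
```

Because the gate alone decides which experts (and hence which safety-relevant parameters) are active for a given input, it is a natural focal point for the gate-guided analysis the title describes.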
