Cited By
Buffer Overflow in Mixture of Experts
arXiv: 2402.05526 · 8 February 2024
Jamie Hayes, Ilia Shumailov, Itay Yona
MoE
ArXiv (abs) · PDF · HTML · HuggingFace (8 upvotes)
Papers citing "Buffer Overflow in Mixture of Experts" (6 of 6 papers shown)
SAFEx: Analyzing Vulnerabilities of MoE-Based LLMs via Stable Safety-critical Expert Identification
ZhengLin Lai, MengYao Liao, Bingzhe Wu, Dong Xu, Zebin Zhao, Zhihang Yuan, Chao Fan, Jianqiang Li
MoE · 20 Jun 2025
BadMoE: Backdooring Mixture-of-Experts LLMs via Optimizing Routing Triggers and Infecting Dormant Experts
Qingyue Wang, Qi Pang, Xixun Lin, Shuai Wang, Daoyuan Wu
MoE · 24 Apr 2025
Capacity-Aware Inference: Mitigating the Straggler Effect in Mixture of Experts
Shwai He, Weilin Cai, Jiayi Huang, Ang Li
MoE · 07 Mar 2025
Stealing User Prompts from Mixture of Experts
Itay Yona, Ilia Shumailov, Jamie Hayes, Nicholas Carlini
MoE · 30 Oct 2024
LLMmap: Fingerprinting For Large Language Models
Dario Pasquini, Evgenios M. Kornaropoulos, G. Ateniese
22 Jul 2024
Operationalizing a Threat Model for Red-Teaming Large Language Models (LLMs)
Apurv Verma, Satyapriya Krishna, Sebastian Gehrmann, Madhavan Seshadri, Anu Pradhan, Tom Ault, Leslie Barrett, David Rabinowitz, John Doucette, Nhathai Phan
20 Jul 2024