Agents Under Siege: Breaking Pragmatic Multi-Agent LLM Systems with Optimized Prompt Attacks

Most discussions of Large Language Model (LLM) safety have focused on single-agent settings, but multi-agent LLM systems now create novel adversarial risks because their behavior depends on communication between agents and decentralized reasoning. In this work, we focus on attacking pragmatic systems that operate under constraints such as limited token bandwidth, message-delivery latency, and defense mechanisms. We design a permutation-invariant adversarial attack that optimizes prompt distribution across latency- and bandwidth-constrained network topologies to bypass distributed safety mechanisms within the system. Formulating the attack path as a maximum-flow minimum-cost problem, coupled with a novel Permutation-Invariant Evasion Loss (PIEL), we leverage graph-based optimization to maximize attack success rate while minimizing detection risk. Evaluating across models including Llama, Mistral, Gemma, DeepSeek, and other variants on datasets such as JailBreakBench and AdversarialBench, our method outperforms conventional attacks by up to 7×, exposing critical vulnerabilities in multi-agent systems. Moreover, we demonstrate that existing defenses, including variants of Llama-Guard and PromptGuard, fail to block our attack, underscoring the urgent need for multi-agent-specific safety mechanisms.
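The abstract gives no implementation details, but its two ingredients can be illustrated with a minimal sketch under stated assumptions. The Python toy below is not the authors' code: it models the agent network as a directed graph whose edge capacities stand in for token-bandwidth limits and whose edge weights stand in for latency/detection cost, routes the adversarial payload with networkx's max_flow_min_cost, and approximates a permutation-invariant objective by averaging a caller-supplied attack_loss (a hypothetical stand-in for whatever scalar the paper optimizes) over all arrival orders of the distributed prompt chunks.

    import itertools
    import networkx as nx

    # Toy agent-communication topology. Attribute names follow networkx
    # conventions ("capacity", "weight"); the numbers are arbitrary and
    # only illustrate bandwidth/latency constraints on each link.
    G = nx.DiGraph()
    G.add_edge("attacker", "agent_a", capacity=2, weight=1)
    G.add_edge("attacker", "agent_b", capacity=1, weight=3)
    G.add_edge("agent_a", "agent_b", capacity=1, weight=1)
    G.add_edge("agent_a", "target", capacity=1, weight=2)
    G.add_edge("agent_b", "target", capacity=2, weight=1)

    # Maximum flow at minimum total cost: how much adversarial payload
    # to route along each edge of the constrained network.
    flow = nx.max_flow_min_cost(G, "attacker", "target")
    print(flow)

    def permutation_invariant_loss(chunks, attack_loss):
        # Average attack_loss over every ordering in which the
        # distributed prompt chunks could arrive, so the optimized
        # prompt does not depend on one particular delivery schedule.
        perms = list(itertools.permutations(chunks))
        return sum(attack_loss(p) for p in perms) / len(perms)

Exhaustive enumeration of orderings is exponential in the number of chunks; sampling a few random permutations per optimization step would be the obvious scalable variant, though the paper's actual estimator may differ.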
@article{khan2025_2504.00218,
  title={$\textit{Agents Under Siege}$: Breaking Pragmatic Multi-Agent LLM Systems with Optimized Prompt Attacks},
  author={Rana Muhammad Shahroz Khan and Zhen Tan and Sukwon Yun and Charles Flemming and Tianlong Chen},
  journal={arXiv preprint arXiv:2504.00218},
  year={2025}
}