
\textit{Agents Under Siege}: Breaking Pragmatic Multi-Agent LLM Systems with Optimized Prompt Attacks

Abstract

Most discussions about Large Language Model (LLM) safety have focused on single-agent settings, but multi-agent LLM systems now create novel adversarial risks because their behavior depends on communication between agents and decentralized reasoning. In this work, we focus on attacking pragmatic systems that have constraints such as limited token bandwidth, latency between message delivery, and defense mechanisms. We design a \textit{permutation-invariant adversarial attack} that optimizes prompt distribution across latency- and bandwidth-constrained network topologies to bypass distributed safety mechanisms within the system. Formulating the attack path as a \textit{maximum-flow minimum-cost} problem, coupled with the novel \textit{Permutation-Invariant Evasion Loss (PIEL)}, we leverage graph-based optimization to maximize attack success rate while minimizing detection risk. Evaluating across models including \texttt{Llama}, \texttt{Mistral}, \texttt{Gemma}, \texttt{DeepSeek} and other variants on datasets such as \texttt{JailBreakBench} and \texttt{AdversarialBench}, our method outperforms conventional attacks by up to $7\times$, exposing critical vulnerabilities in multi-agent systems. Moreover, we demonstrate that existing defenses, including variants of \texttt{Llama-Guard} and \texttt{PromptGuard}, fail to prevent our attack, emphasizing the urgent need for multi-agent-specific safety mechanisms.
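To make the maximum-flow minimum-cost framing concrete, the following is a minimal illustrative sketch (not the paper's implementation): a toy agent communication graph whose edge capacities stand in for token bandwidth and whose edge weights stand in for a combined latency/detection-risk cost, routed with a standard min-cost max-flow solver. The agent names, capacities, and costs are hypothetical.

# Illustrative sketch only: min-cost max-flow routing of adversarial prompt
# fragments over a bandwidth- and latency-constrained agent topology.
# All agent names, capacities, and costs below are hypothetical examples.
import networkx as nx

# Directed communication graph: 'capacity' models token bandwidth per edge,
# 'weight' models a combined latency / detection-risk cost per unit of payload.
G = nx.DiGraph()
G.add_edges_from([
    ("entry",   "planner", {"capacity": 60, "weight": 2}),
    ("entry",   "coder",   {"capacity": 40, "weight": 5}),
    ("planner", "coder",   {"capacity": 30, "weight": 1}),
    ("planner", "target",  {"capacity": 50, "weight": 4}),
    ("coder",   "target",  {"capacity": 45, "weight": 3}),
])

# Push as much payload as the topology admits (max flow) while minimizing
# the total latency/detection cost of the chosen routes (min cost).
flow = nx.max_flow_min_cost(G, "entry", "target")
total_cost = nx.cost_of_flow(G, flow)

for src, outgoing in flow.items():
    for dst, amount in outgoing.items():
        if amount > 0:
            print(f"{src} -> {dst}: route {amount} payload tokens")
print("total routing cost:", total_cost)

The actual attack additionally optimizes the prompt content itself via the permutation-invariant evasion loss; the sketch above covers only the graph-routing component of the formulation.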

@article{khan2025_2504.00218,
  title={$\textit{Agents Under Siege}$: Breaking Pragmatic Multi-Agent LLM Systems with Optimized Prompt Attacks},
  author={Rana Muhammad Shahroz Khan and Zhen Tan and Sukwon Yun and Charles Flemming and Tianlong Chen},
  journal={arXiv preprint arXiv:2504.00218},
  year={2025}
}