14

Agent-Fence: Mapping Security Vulnerabilities Across Deep Research Agents

Sai Puppala
Ismail Hossain
Md Jahangir Alam
Yoonpyo Lee
Jay Yoo
Tanzim Ahad
Syed Bahauddin Alam
Sajedul Talukder
Main:4 Pages
5 Figures
5 Tables
Appendix:7 Pages
Abstract

Large language models are increasingly deployed as *deep agents* that plan, maintain persistent state, and invoke external tools, shifting safety failures from unsafe text to unsafe *trajectories*. We introduce **AgentFence**, an architecture-centric security evaluation that defines 14 trust-boundary attack classes spanning planning, memory, retrieval, tool use, and delegation, and detects failures via *trace-auditable conversation breaks* (unauthorized or unsafe tool use, wrong-principal actions, state/objective integrity violations, and attack-linked deviations). Holding the base model fixed, we evaluate eight agent archetypes under persistent multi-turn interaction and observe substantial architectural variation in mean security break rate (MSBR), ranging from 0.29±0.040.29 \pm 0.04 (LangGraph) to 0.51±0.070.51 \pm 0.07 (AutoGPT). The highest-risk classes are operational: Denial-of-Wallet (0.62±0.080.62 \pm 0.08), Authorization Confusion (0.54±0.100.54 \pm 0.10), Retrieval Poisoning (0.47±0.090.47 \pm 0.09), and Planning Manipulation (0.44±0.110.44 \pm 0.11), while prompt-centric classes remain below 0.200.20 under standard settings. Breaks are dominated by boundary violations (SIV 31%, WPA 27%, UTI+UTA 24%, ATD 18%), and authorization confusion correlates with objective and tool hijacking (ρ0.63\rho \approx 0.63 and ρ0.58\rho \approx 0.58). AgentFence reframes agent security around what matters operationally: whether an agent stays within its goal and authority envelope over time.

View on arXiv
Comments on this paper