ResearchTrend.AI
  • Papers
  • Communities
  • Organizations
  • Events
  • Blog
  • Pricing
  • Feedback
  • Contact Sales
Papers
Communities
Social Events
Terms and Conditions
Pricing
Contact Sales
Parameter LabParameter LabTwitterGitHubLinkedInBlueskyYoutube

© 2025 ResearchTrend.AI, All rights reserved.

  1. Home
  2. Papers
  3. 2509.05755
4
0
v1v2v3v4 (latest)

Exploit Tool Invocation Prompt for Tool Behavior Hijacking in LLM-Based Agentic System

6 September 2025
Yuchong Xie
Mingyu Luo
Zesen Liu
Z. Zhang
Kaikai Zhang
Yu Liu
Zongjie Li
Ping Chen
Shuai Wang
Dongdong She
    LLMAG
ArXiv (abs)PDFHTMLGithub (1★)
Main:18 Pages
7 Figures
Bibliography:3 Pages
6 Tables
Appendix:4 Pages
Abstract

LLM-based agentic systems leverage large language models to handle user queries, make decisions, and execute external tools for complex tasks across domains like chatbots, customer service, and software engineering. A critical component of these systems is the Tool Invocation Prompt (TIP), which defines tool interaction protocols and guides LLMs to ensure the security and correctness of tool usage. Despite its importance, TIP security has been largely overlooked. This work investigates TIP-related security risks, revealing that major LLM-based systems like Cursor, Claude Code, and others are vulnerable to attacks such as remote code execution (RCE) and denial of service (DoS). Through a systematic TIP exploitation workflow (TEW), we demonstrate external tool behavior hijacking via manipulated tool invocations. We also propose defense mechanisms to enhance TIP security in LLM-based agentic systems.

View on arXiv
Comments on this paper