ResearchTrend.AI


On the Security of Tool-Invocation Prompts for LLM-Based Agentic Systems: An Empirical Risk Assessment

6 September 2025
Yuchong Xie
Mingyu Luo
Zesen Liu
Z. Zhang
Kaikai Zhang
Yu Liu
Zongjie Li
Ping Chen
Shuai Wang
Dongdong She
ArXiv (abs) · PDF · HTML · GitHub (1★)

Papers citing "On the Security of Tool-Invocation Prompts for LLM-Based Agentic Systems: An Empirical Risk Assessment"

No papers found.