Prompt Infection: LLM-to-LLM Prompt Injection within Multi-Agent Systems

9 October 2024
Donghyun Lee
Mo Tiwari
LLMAG

Papers citing "Prompt Infection: LLM-to-LLM Prompt Injection within Multi-Agent Systems" (2 of 2 papers shown)

Open Challenges in Multi-Agent Security: Towards Secure Systems of Interacting AI Agents
Christian Schroeder de Witt
AAML, AI4CE
04 May 2025

Prompt Injection Attack to Tool Selection in LLM Agents
Jiawen Shi, Zenghui Yuan, Guiyao Tie, Pan Zhou, Neil Zhenqiang Gong, Lichao Sun
LLMAG
28 Apr 2025