RTBAS: Defending LLM Agents Against Prompt Injection and Privacy Leakage

17 February 2025 · arXiv: 2502.08966
Peter Yong Zhong, Siyuan Chen, Ruiqi Wang, McKenna McCall, Ben L. Titzer, Heather Miller, Phillip B. Gibbons
Topics: LLMAG

Papers citing "RTBAS: Defending LLM Agents Against Prompt Injection and Privacy Leakage"

2 papers shown
Robustness via Referencing: Defending against Prompt Injection Attacks by Referencing the Executed Instruction
Y. Chen, Haoran Li, Yuan Sui, Y. Liu, Yufei He, Y. Song, Bryan Hooi
Topics: AAML, SILM
29 Apr 2025
A Survey of WebAgents: Towards Next-Generation AI Agents for Web Automation with Large Foundation Models
Liangbo Ning, Ziran Liang, Zhuohang Jiang, Haohao Qu, Yujuan Ding, ..., Xiao Wei, Shanru Lin, Hui Liu, Philip S. Yu, Qing Li
Topics: LLMAG, LM&Ro
30 Mar 2025