ResearchTrend.AI
Meta SecAlign: A Secure Foundation LLM Against Prompt Injection Attacks
arXiv:2507.02735 (v2, latest) · 3 July 2025
Sizhe Chen, Arman Zharmagambetov, David Wagner, Chuan Guo
AAML
ArXiv (abs) · PDF · HTML · GitHub (1,579★)

Papers citing "Meta SecAlign: A Secure Foundation LLM Against Prompt Injection Attacks"

19 / 19 papers shown

Mitigating Indirect Prompt Injection via Instruction-Following Intent Analysis
Mintong Kang, Chong Xiang, Sanjay Kariyappa, Chaowei Xiao, Bo Li, Edward Suh
SILM, AAML · 30 Nov 2025

BrowseSafe: Understanding and Preventing Prompt Injection Within AI Browser Agents
Kaiyuan Zhang, Mark Tenenholtz, Kyle Polley, Jerry Ma, Denis Yarats, Ninghui Li
SILM · 25 Nov 2025

EAGER: Edge-Aligned LLM Defense for Robust, Efficient, and Accurate Cybersecurity Question Answering
Onat Gungor, Roshan Sood, Jiasheng Zhou, T. Rosing
AAML · 24 Nov 2025

Taxonomy, Evaluation and Exploitation of IPI-Centric LLM Agent Defense Frameworks
Zimo Ji, Xunguang Wang, Zongjie Li, Pingchuan Ma, Yudong Gao, Daoyuan Wu, Xincheng Yan, Tian Tian, Shuai Wang
LLMAG, AAML · 19 Nov 2025

DRIP: Defending Prompt Injection via Token-wise Representation Editing and Residual Instruction Fusion
Ruofan Liu, Yun Lin, Zhiyong Huang, Jin Song Dong
AAML, SILM · 01 Nov 2025

Defending Against Prompt Injection with DataFilter
Yizhu Wang, Sizhe Chen, Raghad Alkhudair, Basel Alomair, David Wagner
AAML · 22 Oct 2025

Breaking and Fixing Defenses Against Control-Flow Hijacking in Multi-Agent Systems
Rishi Jha, Harold Triedman, Justin Wagle, Vitaly Shmatikov
AAML · 20 Oct 2025

PIShield: Detecting Prompt Injection Attacks via Intrinsic LLM Features
Wei Zou, Yupei Liu, Yanting Wang, Ying Chen, Neil Zhenqiang Gong, Jinyuan Jia
AAML · 15 Oct 2025

The Attacker Moves Second: Stronger Adaptive Attacks Bypass Defenses Against LLM Jailbreaks and Prompt Injections
Milad Nasr, Nicholas Carlini, Chawin Sitawarin, Sander Schulhoff, Jamie Hayes, ..., Ilia Shumailov, Abhradeep Thakurta, Kai Yuanqing Xiao, Seth Neel, F. Tramèr
AAML, ELM · 10 Oct 2025

CommandSans: Securing AI Agents with Surgical Precision Prompt Sanitization
Debeshee Das, Luca Beurer-Kellner, Marc Fischer, Maximilian Baader
AAML · 09 Oct 2025

RL Is a Hammer and LLMs Are Nails: A Simple Reinforcement Learning Recipe for Strong Prompt Injection
Yuxin Wen, Arman Zharmagambetov, Ivan Evtimov, Narine Kokhlikyan, Tom Goldstein, Kamalika Chaudhuri, Chuan Guo
OffRL, SILM · 06 Oct 2025

Better Privilege Separation for Agents by Restricting Data Types
Dennis Jacob, Emad Alghamdi, Zhanhao Hu, Basel Alomair, David Wagner
AAML · 30 Sep 2025

SecInfer: Preventing Prompt Injection via Inference-time Scaling
Yupei Liu, Yanting Wang, Yuqi Jia, Jinyuan Jia, Neil Zhenqiang Gong
LRM, SILM, AAML · 29 Sep 2025

Automatic Red Teaming LLM-based Agents with Model Context Protocol Tools
Ping He, Changjiang Li, Xingshuang Lin, Xuhong Zhang, R. Beyah
LLMAG, AAML · 25 Sep 2025

The Sum Leaks More Than Its Parts: Compositional Privacy Risks and Mitigations in Multi-Agent Collaboration
Vaidehi Patil, Elias Stengel-Eskin, Mohit Bansal
16 Sep 2025

Evaluating the Robustness of Retrieval-Augmented Generation to Adversarial Evidence in the Health Domain
Shakiba Amirshahi, Amin Bigdeli, Charles L. A. Clarke, Amira Ghenai
AAML · 04 Sep 2025

Defending Against Prompt Injection With a Few DefensiveTokens
Sizhe Chen, Yizhu Wang, Nicholas Carlini, Chawin Sitawarin, David Wagner
LLMAG, AAML, SILM · 10 Jul 2025

RedTeamCUA: Realistic Adversarial Testing of Computer-Use Agents in Hybrid Web-OS Environments
Zeyi Liao, Jaylen Jones, Linxi Jiang, Eric Fosler-Lussier, Yu-Chuan Su, Zhiqiang Lin, Huan Sun
ELM · 28 May 2025

Progent: Programmable Privilege Control for LLM Agents
Tianneng Shi, Jingxuan He, Yu Yang, Hongwei Li, Linyu Wu, Wenbo Guo, Dawn Song
LLMAG · 16 Apr 2025