arXiv:2401.00991
A Novel Evaluation Framework for Assessing Resilience Against Prompt Injection Attacks in Large Language Models
Daniel Wankit Yip, Aysan Esmradi, C. Chan
2 January 2024
[AAML]
Papers citing "A Novel Evaluation Framework for Assessing Resilience Against Prompt Injection Attacks in Large Language Models" (9 papers)
1. Security Threats in Agentic AI System
   Raihan Khan, Sayak Sarkar, Sainik Kumar Mahata, Edwin Jose
   16 Oct 2024

2. System-Level Defense against Indirect Prompt Injection Attacks: An Information Flow Control Perspective
   Fangzhou Wu, Ethan Cecchetti, Chaowei Xiao
   27 Sep 2024

3. Applying Pre-trained Multilingual BERT in Embeddings for Improved Malicious Prompt Injection Attacks Detection
   M. Rahman, Hossain Shahriar, Fan Wu, A. Cuzzocrea
   [AAML] 20 Sep 2024

4. Automatic and Universal Prompt Injection Attacks against Large Language Models
   Xiaogeng Liu, Zhiyuan Yu, Yizhe Zhang, Ning Zhang, Chaowei Xiao
   [SILM, AAML] 07 Mar 2024

5. A New Era in LLM Security: Exploring Security Concerns in Real-World LLM-based Systems
   Fangzhou Wu, Ning Zhang, Somesh Jha, P. McDaniel, Chaowei Xiao
   28 Feb 2024

6. WIPI: A New Web Threat for LLM-Driven Web Agents
   Fangzhou Wu, Shutong Wu, Yulong Cao, Chaowei Xiao
   [LLMAG] 26 Feb 2024

7. COLD-Attack: Jailbreaking LLMs with Stealthiness and Controllability
   Xing-ming Guo, Fangxu Yu, Huan Zhang, Lianhui Qin, Bin Hu
   [AAML] 13 Feb 2024

8. StruQ: Defending Against Prompt Injection with Structured Queries
   Sizhe Chen, Julien Piet, Chawin Sitawarin, David A. Wagner
   [SILM, AAML] 09 Feb 2024

9. GLM-130B: An Open Bilingual Pre-trained Model
   Aohan Zeng, Xiao Liu, Zhengxiao Du, Zihan Wang, Hanyu Lai, ..., Jidong Zhai, Wenguang Chen, Peng-Zhen Zhang, Yuxiao Dong, Jie Tang
   [BDL, LRM] 05 Oct 2022