Prompt Injection attack against LLM-integrated Applications
arXiv:2306.05499 · 8 June 2023 · SILM
Yi Liu, Gelei Deng, Yuekang Li, Kailong Wang, Zihao Wang, XiaoFeng Wang, Tianwei Zhang, Yepang Liu, Haoyu Wang, Yanhong Zheng, Yang Liu
Papers citing "Prompt Injection attack against LLM-integrated Applications" (20 of 220 shown)
Image Hijacks: Adversarial Images can Control Generative Models at Runtime
Luke Bailey, Euan Ong, Stuart J. Russell, Scott Emmons · VLM, MLLM · 01 Sep 2023
Large Language Models for Software Engineering: A Systematic Literature Review
Xinying Hou, Yanjie Zhao, Yue Liu, Zhou Yang, Kailong Wang, Li Li, Xiapu Luo, David Lo, John C. Grundy, Haoyu Wang · 21 Aug 2023
Self-Deception: Reverse Penetrating the Semantic Firewall of Large Language Models
Zhenhua Wang, Wei Xie, Kai Chen, Baosheng Wang, Zhiwen Gui, Enze Wang · AAML, SILM · 16 Aug 2023
PentestGPT: An LLM-empowered Automatic Penetration Testing Tool
Gelei Deng, Yi Liu, Víctor Mayoral-Vilches, Peng Liu, Yuekang Li, Yuan Xu, Tianwei Zhang, Yang Liu, M. Pinzger, Stefan Rass · LLMAG · 13 Aug 2023
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
Jen-tse Huang, Man Ho Adrian Lam, E. Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu · 07 Aug 2023
PromptCARE: Prompt Copyright Protection by Watermark Injection and Verification
Hongwei Yao, Jian Lou, Kui Ren, Zhan Qin · AAML, VLM · 05 Aug 2023
From Prompt Injections to SQL Injection Attacks: How Protected is Your LLM-Integrated Web Application?
Rodrigo Pedro, Daniel Castro, Paulo Carreira, Nuno Santos · SILM, AAML · 03 Aug 2023
Is GPT-4 a reliable rater? Evaluating Consistency in GPT-4 Text Ratings
Veronika Hackl, Alexandra Elena Müller, Michael Granitzer, Maximilian Sailer · ALM · 03 Aug 2023
Backdooring Instruction-Tuned Large Language Models with Virtual Prompt Injection
Jun Yan, Vikas Yadav, Shiyang Li, Lichang Chen, Zheng Tang, Hai Wang, Vijay Srinivasan, Xiang Ren, Hongxia Jin · SILM · 31 Jul 2023
Jailbreak in pieces: Compositional Adversarial Attacks on Multi-Modal Language Models
Erfan Shayegani, Yue Dong, Nael B. Abu-Ghazaleh · 26 Jul 2023
MasterKey: Automated Jailbreak Across Multiple Large Language Model Chatbots
Gelei Deng, Yi Liu, Yuekang Li, Kailong Wang, Ying Zhang, Zefeng Li, Haoyu Wang, Tianwei Zhang, Yang Liu · SILM · 16 Jul 2023
Effective Prompt Extraction from Language Models
Yiming Zhang, Nicholas Carlini, Daphne Ippolito · MIACV, SILM · 13 Jul 2023
Large Language Models for Supply Chain Optimization
Beibin Li, Konstantina Mellou, Bo-qing Zhang, Jeevan Pathuri, Ishai Menache · 08 Jul 2023
PromptRobust: Towards Evaluating the Robustness of Large Language Models on Adversarial Prompts
Kaijie Zhu, Jindong Wang, Jiaheng Zhou, Zichen Wang, Hao Chen, …, Linyi Yang, Weirong Ye, Yue Zhang, Neil Zhenqiang Gong, Xingxu Xie · SILM · 07 Jun 2023
Enhancing Large Language Models Against Inductive Instructions with Dual-critique Prompting
Rui Wang, Hongru Wang, Fei Mi, Yi Chen, Boyang Xue, Kam-Fai Wong, Rui-Lan Xu · 23 May 2023
Emergent autonomous scientific research capabilities of large language models
Daniil A. Boiko, R. MacKnight, Gabe Gomes · ELM, LM&Ro, AI4CE, LLMAG · 11 Apr 2023
Generative Agents: Interactive Simulacra of Human Behavior
J. Park, Joseph C. O'Brien, Carrie J. Cai, Meredith Ringel Morris, Percy Liang, Michael S. Bernstein · LM&Ro, AI4CE · 07 Apr 2023
SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models
Potsawee Manakul, Adian Liusie, Mark J. F. Gales · HILM, LRM · 15 Mar 2023
Attacks in Adversarial Machine Learning: A Systematic Survey from the Life-cycle Perspective
Baoyuan Wu, Zihao Zhu, Li Liu, Qingshan Liu, Zhaofeng He, Siwei Lyu · AAML · 19 Feb 2023
ReAct: Synergizing Reasoning and Acting in Language Models
Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, Yuan Cao · LLMAG, ReLM, LRM · 06 Oct 2022