ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Prompt Injection attack against LLM-integrated Applications (arXiv: 2306.05499)
8 June 2023
Yi Liu, Gelei Deng, Yuekang Li, Kailong Wang, Zihao Wang, XiaoFeng Wang, Tianwei Zhang, Yepang Liu, Haoyu Wang, Yanhong Zheng, Yang Liu
    SILM

Papers citing "Prompt Injection attack against LLM-integrated Applications"

50 / 220 papers shown
Do LLMs dream of elephants (when told not to)? Latent concept association and associative memory in transformers
Yibo Jiang, Goutham Rajendran, Pradeep Ravikumar, Bryon Aragam
CLL, KELM
26 Jun 2024

Adversarial Search Engine Optimization for Large Language Models
Fredrik Nestaas, Edoardo Debenedetti, Florian Tramèr
AAML
26 Jun 2024

AgentDojo: A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents
Edoardo Debenedetti, Jie Zhang, Mislav Balunović, Luca Beurer-Kellner, Marc Fischer, Florian Tramèr
LLMAG, AAML
19 Jun 2024

Adversarial Attacks on Large Language Models in Medicine
Yifan Yang, Qiao Jin, Furong Huang, Zhiyong Lu
AAML
18 Jun 2024

Self and Cross-Model Distillation for LLMs: Effective Methods for Refusal Pattern Alignment
Jie Li, Yi Liu, Chongyang Liu, Xiaoning Ren, Ling Shi, Weisong Sun, Yinxing Xue
17 Jun 2024

Threat Modelling and Risk Analysis for Large Language Model (LLM)-Powered Applications
Stephen Burabari Tete
16 Jun 2024

TorchOpera: A Compound AI System for LLM Safety
Shanshan Han, Yuhang Yao, Zijian Hu, Dimitris Stripelis, Zhaozhuo Xu, Chaoyang He
LLMAG
16 Jun 2024

Large Language Models as Software Components: A Taxonomy for LLM-Integrated Applications
Irene Weber
13 Jun 2024

Unique Security and Privacy Threats of Large Language Model: A Comprehensive Survey
Shang Wang, Tianqing Zhu, Bo Liu, Ming Ding, Xu Guo, Dayong Ye, Wanlei Zhou, Philip S. Yu
PILM
12 Jun 2024

Dataset and Lessons Learned from the 2024 SaTML LLM Capture-the-Flag Competition
Edoardo Debenedetti, Javier Rando, Daniel Paleka, Silaghi Fineas Florin, Dragos Albastroiu, ..., Stefan Kraft, Mario Fritz, Florian Tramèr, Sahar Abdelnabi, Lea Schonherr
12 Jun 2024

Raccoon: Prompt Extraction Benchmark of LLM-Integrated Applications
Junlin Wang, Tianyi Yang, Roy Xie, Bhuwan Dhingra
SILM, AAML
10 Jun 2024

SelfDefend: LLMs Can Defend Themselves against Jailbreaking in a Practical Manner
Xunguang Wang, Daoyuan Wu, Zhenlan Ji, Zongjie Li, Pingchuan Ma, Shuai Wang, Yingjiu Li, Yang Liu, Ning Liu, Juergen Rahmel
AAML
08 Jun 2024

More Victories, Less Cooperation: Assessing Cicero's Diplomacy Play
Wichayaporn Wongkamjan, Feng Gu, Yanze Wang, Ulf Hermjakob, Jonathan May, Brandon M. Stewart, Jonathan K. Kummerfeld, Denis Peskoff, Jordan L. Boyd-Graber
07 Jun 2024

A Survey of Language-Based Communication in Robotics
William Hunt, Sarvapali D. Ramchurn, Mohammad D. Soorati
LM&Ro
06 Jun 2024

Measure-Observe-Remeasure: An Interactive Paradigm for Differentially-Private Exploratory Analysis
Priyanka Nanayakkara, Hyeok Kim, Yifan Wu, Ali Sarvghad, Narges Mahyar, G. Miklau, Jessica Hullman
04 Jun 2024

AI Agents Under Threat: A Survey of Key Security Challenges and Future Pathways
Zehang Deng, Yongjian Guo, Changzhou Han, Wanlun Ma, Junwu Xiong, Sheng Wen, Yang Xiang
04 Jun 2024

Safeguarding Large Language Models: A Survey
Yi Dong, Ronghui Mu, Yanghao Zhang, Siqi Sun, Tianle Zhang, ..., Yi Qi, Jinwei Hu, Jie Meng, Saddek Bensalem, Xiaowei Huang
OffRL, KELM, AILaw
03 Jun 2024

PrivacyRestore: Privacy-Preserving Inference in Large Language Models via Privacy Removal and Restoration
Ziqian Zeng, Jianwei Wang, Zhengdong Lu, Huiping Zhuang, Cen Chen
RALM, KELM
03 Jun 2024

Privacy in LLM-based Recommendation: Recent Advances and Future Directions
Sichun Luo, Wei Shao, Yuxuan Yao, Jian Xu, Mingyang Liu, ..., Maolin Wang, Guanzhi Deng, Hanxu Hou, Xinyi Zhang, Linqi Song
03 Jun 2024

BadRAG: Identifying Vulnerabilities in Retrieval Augmented Generation of Large Language Models
Jiaqi Xue, Meng Zheng, Yebowen Hu, Fei Liu, Xun Chen, Qian Lou
AAML, SILM
03 Jun 2024

Exfiltration of personal information from ChatGPT via prompt injection
Gregory Schwartzman
SILM
31 May 2024

Enhancing Jailbreak Attack Against Large Language Models through Silent Tokens
Jiahao Yu, Haozheng Luo, Jerry Yao-Chieh Hu, Wenbo Guo, Han Liu, Xinyu Xing
31 May 2024

AI Risk Management Should Incorporate Both Safety and Security
Xiangyu Qi, Yangsibo Huang, Yi Zeng, Edoardo Debenedetti, Jonas Geiping, ..., Chaowei Xiao, Bo-wen Li, Dawn Song, Peter Henderson, Prateek Mittal
AAML
29 May 2024

Semantic-guided Prompt Organization for Universal Goal Hijacking against LLMs
Yihao Huang, Chong Wang, Xiaojun Jia, Qing-Wu Guo, Felix Juefei Xu, Jian Zhang, G. Pu, Yang Liu
23 May 2024

Lockpicking LLMs: A Logit-Based Jailbreak Using Token-level Manipulation
Yuxi Li, Yi Liu, Yuekang Li, Ling Shi, Gelei Deng, Shengquan Chen, Kailong Wang
20 May 2024

Sociotechnical Implications of Generative Artificial Intelligence for Information Access
Bhaskar Mitra, Henriette Cramer, Olya Gurevich
19 May 2024

Safeguarding Vision-Language Models Against Patched Visual Prompt Injectors
Jiachen Sun, Changsheng Wang, Jiong Wang, Yiwei Zhang, Chaowei Xiao
AAML, VLM
17 May 2024

What is it for a Machine Learning Model to Have a Capability?
Jacqueline Harding, Nathaniel Sharadin
ELM
14 May 2024

PARDEN, Can You Repeat That? Defending against Jailbreaks via Repetition
Ziyang Zhang, Qizhen Zhang, Jakob N. Foerster
AAML
13 May 2024

Risks of Practicing Large Language Models in Smart Grid: Threat Modeling and Validation
Jiangnan Li, Yingyuan Yang, Jinyuan Stella Sun
10 May 2024

Large Language Models for Cyber Security: A Systematic Literature Review
HanXiang Xu, Shenao Wang, Ningke Li, K. Wang, Yanjie Zhao, Kai Chen, Ting Yu, Yang Janet Liu, H. Wang
08 May 2024

Human-Imperceptible Retrieval Poisoning Attacks in LLM-Powered Applications
Quan Zhang, Binqi Zeng, Chijin Zhou, Gwihwan Go, Heyuan Shi, Yu Jiang
SILM, AAML
26 Apr 2024

Attacks on Third-Party APIs of Large Language Models
Wanru Zhao, Vidit Khazanchi, Haodi Xing, Xuanli He, Qiongkai Xu, Nicholas D. Lane
24 Apr 2024

The Instruction Hierarchy: Training LLMs to Prioritize Privileged Instructions
Eric Wallace, Kai Y. Xiao, R. Leike, Lilian Weng, Johannes Heidecke, Alex Beutel
SILM
19 Apr 2024

LLMs for Cyber Security: New Opportunities
D. Divakaran, Sai Teja Peddinti
17 Apr 2024

Glitch Tokens in Large Language Models: Categorization Taxonomy and Effective Detection
Yuxi Li, Yi Liu, Gelei Deng, Ying Zhang, Wenjia Song, Ling Shi, Kailong Wang, Yuekang Li, Yang Liu, Haoyu Wang
15 Apr 2024

GoEX: Perspectives and Designs Towards a Runtime for Autonomous LLM Applications
Shishir G. Patil, Tianjun Zhang, Vivian Fang, Noppapon C., Roy Huang, Aaron Hao, Martin Casado, Joseph E. Gonzalez, Raluca Ada Popa, Ion Stoica
ALM
10 Apr 2024

CodecLM: Aligning Language Models with Tailored Synthetic Data
Zifeng Wang, Chun-Liang Li, Vincent Perot, Long T. Le, Jin Miao, Zizhao Zhang, Chen-Yu Lee, Tomas Pfister
SyDa, ALM
08 Apr 2024

Goal-guided Generative Prompt Injection Attack on Large Language Models
Chong Zhang, Mingyu Jin, Qinkai Yu, Chengzhi Liu, Haochen Xue, Xiaobo Jin
AAML, SILM
06 Apr 2024

Octopus v2: On-device language model for super agent
Wei Chen, Zhiyuan Li
RALM
02 Apr 2024

Can LLMs get help from other LLMs without revealing private information?
Florian Hartmann, D. Tran, Peter Kairouz, Victor Carbune, Blaise Agüera y Arcas
01 Apr 2024

Exploring the Privacy Protection Capabilities of Chinese Large Language Models
Yuqi Yang, Xiaowen Huang, Jitao Sang
ELM, PILM, AILaw
27 Mar 2024

BadEdit: Backdooring large language models by model editing
Yanzhou Li, Tianlin Li, Kangjie Chen, Jian Zhang, Shangqing Liu, Wenhan Wang, Tianwei Zhang, Yang Liu
SyDa, AAML, KELM
20 Mar 2024

Securing Large Language Models: Threats, Vulnerabilities and Responsible Practices
Sara Abdali, Richard Anarfi, C. Barberan, Jia He
PILM
19 Mar 2024

Large language models in 6G security: challenges and opportunities
Tri Nguyen, Huong Nguyen, Ahmad Ijaz, Saeid Sheikhi, Athanasios V. Vasilakos, Panos Kostakos
ELM
18 Mar 2024

Logits of API-Protected LLMs Leak Proprietary Information
Matthew Finlayson, Xiang Ren, Swabha Swayamdipta
PILM
14 Mar 2024

Strengthening Multimodal Large Language Model with Bootstrapped Preference Optimization
Renjie Pi, Tianyang Han, Wei Xiong, Jipeng Zhang, Runtao Liu, Rui Pan, Tong Zhang
MLLM
13 Mar 2024

Knowledge Conflicts for LLMs: A Survey
Rongwu Xu, Zehan Qi, Zhijiang Guo, Cunxiang Wang, Hongru Wang, Yue Zhang, Wei Xu
13 Mar 2024

Automatic and Universal Prompt Injection Attacks against Large Language Models
Xiaogeng Liu, Zhiyuan Yu, Yizhe Zhang, Ning Zhang, Chaowei Xiao
SILM, AAML
07 Mar 2024

Neural Exec: Learning (and Learning from) Execution Triggers for Prompt Injection Attacks
Dario Pasquini, Martin Strohmeier, Carmela Troncoso
AAML
06 Mar 2024