Prompt Injection attack against LLM-integrated Applications
arXiv 2306.05499 · 8 June 2023 · SILM
Yi Liu, Gelei Deng, Yuekang Li, Kailong Wang, Zihao Wang, XiaoFeng Wang, Tianwei Zhang, Yepang Liu, Haoyu Wang, Yanhong Zheng, Yang Liu

Papers citing "Prompt Injection attack against LLM-integrated Applications"

50 / 220 papers shown
Breaking ReAct Agents: Foot-in-the-Door Attack Will Get You In
Itay Nakash, George Kour, Guy Uziel, Ateret Anaby-Tavor
AAML, LLMAG · 32 · 4 · 0 · 22 Oct 2024

Insights and Current Gaps in Open-Source LLM Vulnerability Scanners: A Comparative Analysis
Jonathan Brokman, Omer Hofman, Oren Rachmil, Inderjeet Singh, Vikas Pahuja, Rathina Sabapathy Aishvariya Priya, Amit Giloni, Roman Vainshtein, Hisashi Kojima
31 · 2 · 0 · 21 Oct 2024

The Best Defense is a Good Offense: Countering LLM-Powered Cyberattacks
Daniel Ayzenshteyn, Roy Weiss, Yisroel Mirsky
AAML · 31 · 0 · 0 · 20 Oct 2024

Imprompter: Tricking LLM Agents into Improper Tool Use
Xiaohan Fu, Shuheng Li, Zihan Wang, Y. Liu, Rajesh K. Gupta, Taylor Berg-Kirkpatrick, Earlence Fernandes
SILM, LLMAG · 54 · 15 · 0 · 19 Oct 2024

Cognitive Overload Attack: Prompt Injection for Long Context
Bibek Upadhayay, Vahid Behzadan, Amin Karbasi
AAML · 28 · 2 · 0 · 15 Oct 2024

Survival of the Safest: Towards Secure Prompt Optimization through Interleaved Multi-Objective Evolution
Ankita Sinha, Wendi Cui, Kamalika Das, Jiaxin Zhang
AAML · 28 · 2 · 0 · 12 Oct 2024

DAWN: Designing Distributed Agents in a Worldwide Network
Zahra Aminiranjbar, Jianan Tang, Qiudan Wang, Shubha Pant, Mahesh Viswanathan
LLMAG, AI4CE · 26 · 2 · 0 · 11 Oct 2024

Controllable Safety Alignment: Inference-Time Adaptation to Diverse Safety Requirements
Jingyu Zhang, Ahmed Elgohary, Ahmed Magooda, Daniel Khashabi, Benjamin Van Durme
122 · 2 · 0 · 11 Oct 2024

Prompt Infection: LLM-to-LLM Prompt Injection within Multi-Agent Systems
Donghyun Lee, Mo Tiwari
LLMAG · 31 · 9 · 0 · 09 Oct 2024

Bridging Today and the Future of Humanity: AI Safety in 2024 and Beyond
Shanshan Han
75 · 1 · 0 · 09 Oct 2024

Instructional Segment Embedding: Improving LLM Safety with Instruction Hierarchy
Tong Wu, Shujian Zhang, Kaiqiang Song, Silei Xu, Sanqiang Zhao, Ravi Agrawal, Sathish Indurthi, Chong Xiang, Prateek Mittal, Wenxuan Zhou
37 · 7 · 0 · 09 Oct 2024

On Instruction-Finetuning Neural Machine Translation Models
Vikas Raunak, Roman Grundkiewicz, Marcin Junczys-Dowmunt
26 · 1 · 0 · 07 Oct 2024

Permissive Information-Flow Analysis for Large Language Models
Shoaib Ahmed Siddiqui, Radhika Gaonkar, Boris Köpf, David M. Krueger, Andrew J. Paverd, Ahmed Salem, Shruti Tople, Lukas Wutschitz, Menglin Xia, Santiago Zanella Béguelin
28 · 1 · 0 · 04 Oct 2024

Agent Security Bench (ASB): Formalizing and Benchmarking Attacks and Defenses in LLM-based Agents
Hanrong Zhang, Jingyuan Huang, Kai Mei, Yifei Yao, Zhenting Wang, Chenlu Zhan, Hongwei Wang, Yongfeng Zhang
AAML, LLMAG, ELM · 51 · 18 · 0 · 03 Oct 2024

VLMGuard: Defending VLMs against Malicious Prompts via Unlabeled Data
Xuefeng Du, Reshmi Ghosh, Robert Sim, Ahmed Salem, Vitor Carvalho, Emily Lawton, Yixuan Li, Jack W. Stokes
VLM, AAML · 38 · 5 · 0 · 01 Oct 2024

System-Level Defense against Indirect Prompt Injection Attacks: An Information Flow Control Perspective
Fangzhou Wu, Ethan Cecchetti, Chaowei Xiao
39 · 12 · 0 · 27 Sep 2024

AI Delegates with a Dual Focus: Ensuring Privacy and Strategic Self-Disclosure
Xi Chen, Zhiyang Zhang, Fangkai Yang, Xiaoting Qin, Chao Du, ..., Hangxin Liu, Qingwei Lin, Saravan Rajmohan, Dongmei Zhang, Qi Zhang
37 · 1 · 0 · 26 Sep 2024

Demystifying Issues, Causes and Solutions in LLM Open-Source Projects
Yangxiao Cai, Peng Liang, Yifei Wang, Zengyang Li, Mojtaba Shahin
40 · 2 · 0 · 25 Sep 2024

Steward: Natural Language Web Automation
Brian Tang, Kang G. Shin
LLMAG · 29 · 1 · 0 · 23 Sep 2024

PROMPTFUZZ: Harnessing Fuzzing Techniques for Robust Testing of Prompt Injection in LLMs
Jiahao Yu, Yangguang Shao, Hanwen Miao, Junzheng Shi
SILM, AAML · 69 · 4 · 0 · 23 Sep 2024

Applying Pre-trained Multilingual BERT in Embeddings for Improved Malicious Prompt Injection Attacks Detection
M. Rahman, Hossain Shahriar, Fan Wu, A. Cuzzocrea
AAML · 36 · 4 · 0 · 20 Sep 2024

Multitask Mayhem: Unveiling and Mitigating Safety Gaps in LLMs Fine-tuning
Essa Jan, Nouar Aldahoul, Moiz Ali, Faizan Ahmad, Fareed Zaffar, Yasir Zaki
21 · 3 · 0 · 18 Sep 2024

LLM-Agent-UMF: LLM-based Agent Unified Modeling Framework for Seamless Integration of Multi Active/Passive Core-Agents
Amine B. Hassouna, Hana Chaari, Ines Belhaj
LLMAG · 30 · 1 · 0 · 17 Sep 2024

CoCA: Regaining Safety-awareness of Multimodal Large Language Models with Constitutional Calibration
Jiahui Gao, Renjie Pi, Tianyang Han, Han Wu, Lanqing Hong, Lingpeng Kong, Xin Jiang, Zhenguo Li
41 · 5 · 0 · 17 Sep 2024

Prompt Obfuscation for Large Language Models
David Pape, Thorsten Eisenhofer, Lea Schönherr
AAML · 33 · 2 · 0 · 17 Sep 2024

Exploring LLMs for Malware Detection: Review, Framework Design, and Countermeasure Approaches
Jamal N. Al-Karaki, Muhammad Al-Zafar Khan, Marwan Omar
34 · 6 · 0 · 11 Sep 2024

Recent Advances in Attack and Defense Approaches of Large Language Models
Jing Cui, Yishi Xu, Zhewei Huang, Shuchang Zhou, Jianbin Jiao, Junge Zhang
PILM, AAML · 54 · 1 · 0 · 05 Sep 2024

Efficient Detection of Toxic Prompts in Large Language Models
Yi Liu, Junzhe Yu, Huijia Sun, Ling Shi, Gelei Deng, Yuqi Chen, Yang Liu
29 · 4 · 0 · 21 Aug 2024

Hide Your Malicious Goal Into Benign Narratives: Jailbreak Large Language Models through Carrier Articles
Zhilong Wang, Haizhou Wang, Nanqing Luo, Lan Zhang, Xiaoyan Sun, Yebo Cao, Peng Liu
25 · 0 · 0 · 20 Aug 2024

Enhance Modality Robustness in Text-Centric Multimodal Alignment with Adversarial Prompting
Yun-Da Tsai, Ting-Yu Yen, Keng-Te Liao, Shou-De Lin
29 · 1 · 0 · 19 Aug 2024

Privacy Checklist: Privacy Violation Detection Grounding on Contextual Integrity Theory
Haoran Li, Wei Fan, Yulin Chen, Jiayang Cheng, Tianshu Chu, Xuebing Zhou, Peizhao Hu, Yangqiu Song
AILaw · 43 · 2 · 0 · 19 Aug 2024

Characterizing and Evaluating the Reliability of LLMs against Jailbreak Attacks
Kexin Chen, Yi Liu, Dongxia Wang, Jiaying Chen, Wenhai Wang
44 · 1 · 0 · 18 Aug 2024

BaThe: Defense against the Jailbreak Attack in Multimodal Large Language Models by Treating Harmful Instruction as Backdoor Trigger
Yulin Chen, Haoran Li, Zihao Zheng, Yangqiu Song, Bryan Hooi
43 · 6 · 0 · 17 Aug 2024

GlitchProber: Advancing Effective Detection and Mitigation of Glitch Tokens in Large Language Models
Zhibo Zhang, Wuxia Bai, Yuxi Li, M. Meng, K. Wang, Ling Shi, Li Li, Jun Wang, Haoyu Wang
24 · 4 · 0 · 09 Aug 2024

ConfusedPilot: Confused Deputy Risks in RAG-based LLMs
Ayush RoyChowdhury, Mulong Luo, Prateek Sahu, Sarbartha Banerjee, Mohit Tiwari
SILM · 43 · 0 · 0 · 09 Aug 2024

The Emerged Security and Privacy of LLM Agent: A Survey with Case Studies
Feng He, Tianqing Zhu, Dayong Ye, Bo Liu, Wanlei Zhou, Philip S. Yu
PILM, LLMAG, ELM · 68 · 23 · 0 · 28 Jul 2024

Blockchain for Large Language Model Security and Safety: A Holistic Survey
Caleb Geren, Amanda Board, Gaby G. Dagher, Tim Andersen, Jun Zhuang
46 · 6 · 0 · 26 Jul 2024

Operationalizing a Threat Model for Red-Teaming Large Language Models (LLMs)
Apurv Verma, Satyapriya Krishna, Sebastian Gehrmann, Madhavan Seshadri, Anu Pradhan, Tom Ault, Leslie Barrett, David Rabinowitz, John Doucette, Nhathai Phan
49 · 9 · 0 · 20 Jul 2024

Black-Box Opinion Manipulation Attacks to Retrieval-Augmented Generation of Large Language Models
Zhuo Chen, Jiawei Liu, Haotan Liu, Qikai Cheng, Fan Zhang, Wei Lu, Xiaozhong Liu
AAML · 34 · 6 · 0 · 18 Jul 2024

Evaluating AI Evaluation: Perils and Prospects
John Burden
ELM · 33 · 8 · 0 · 12 Jul 2024

DeCE: Deceptive Cross-Entropy Loss Designed for Defending Backdoor Attacks
Guang Yang, Yu Zhou, Xiang Chen, Xiangyu Zhang, Terry Yue Zhuo, David Lo, Taolue Chen
AAML · 52 · 4 · 0 · 12 Jul 2024

Systematic Categorization, Construction and Evaluation of New Attacks against Multi-modal Mobile GUI Agents
Yulong Yang, Xinshan Yang, Shuaidong Li, Chenhao Lin, Zhengyu Zhao, Chao Shen, Tianwei Zhang
40 · 1 · 0 · 12 Jul 2024

Multilingual Blending: LLM Safety Alignment Evaluation with Language Mixture
Jiayang Song, Yuheng Huang, Zhehua Zhou, Lei Ma
37 · 6 · 0 · 10 Jul 2024

Safe-Embed: Unveiling the Safety-Critical Knowledge of Sentence Encoders
Jinseok Kim, Jaewon Jung, Sangyeop Kim, S. Park, Sungzoon Cho
56 · 0 · 0 · 09 Jul 2024

Large Language Model as an Assignment Evaluator: Insights, Feedback, and Challenges in a 1000+ Student Course
Cheng-Han Chiang, Wei-Chih Chen, Chun-Yi Kuan, Chienchou Yang, Hung-yi Lee
ELM, AI4Ed · 41 · 5 · 0 · 07 Jul 2024

AI Safety in Generative AI Large Language Models: A Survey
Jaymari Chua, Yun Yvonna Li, Shiyi Yang, Chen Wang, Lina Yao
LM&MA · 36 · 12 · 0 · 06 Jul 2024

Soft Begging: Modular and Efficient Shielding of LLMs against Prompt Injection and Jailbreaking based on Prompt Tuning
Simon Ostermann, Kevin Baum, Christoph Endres, Julia Masloh, P. Schramowski
AAML · 46 · 1 · 0 · 03 Jul 2024

A Survey on Failure Analysis and Fault Injection in AI Systems
Guangba Yu, Gou Tan, Haojia Huang, Zhenyu Zhang, Pengfei Chen, Roberto Natella, Zibin Zheng
34 · 3 · 0 · 28 Jun 2024

A Survey on Privacy Attacks Against Digital Twin Systems in AI-Robotics
Ivan A. Fernandez, Subash Neupane, Trisha Chakraborty, Shaswata Mitra, Sudip Mittal, Nisha Pillai, Jingdao Chen, Shahram Rahimi
52 · 1 · 0 · 27 Jun 2024

Psychological Profiling in Cybersecurity: A Look at LLMs and Psycholinguistic Features
Jean Marie Tshimula, D'Jeff K. Nkashama, Jean Tshibangu Muabila, René Manassé Galekwa, Hugues Kanda, ..., Belkacem Chikhaoui, Shengrui Wang, Ali Mulenda Sumbu, Xavier Ndona, Raoul Kienge-Kienge Intudi
47 · 0 · 0 · 26 Jun 2024