Universal Vulnerabilities in Large Language Models: Backdoor Attacks for In-context Learning
arXiv:2401.05949 · 11 January 2024
Shuai Zhao, Meihuizi Jia, Anh Tuan Luu, Fengjun Pan, Jinming Wen
Tags: AAML

Papers citing "Universal Vulnerabilities in Large Language Models: Backdoor Attacks for In-context Learning" (8 of 8 papers shown)

Attention Tracker: Detecting Prompt Injection Attacks in LLMs
Kuo-Han Hung, Ching-Yun Ko, Ambrish Rawat, I-Hsin Chung, Winston H. Hsu, Pin-Yu Chen
01 Nov 2024

Krait: A Backdoor Attack Against Graph Prompt Tuning
Ying Song, Rita Singh, Balaji Palanisamy
Tags: AAML
18 Jul 2024

Prompt as Triggers for Backdoor Attack: Examining the Vulnerability in Language Models
Shuai Zhao, Jinming Wen, Anh Tuan Luu, J. Zhao, Jie Fu
Tags: SILM
02 May 2023

Poisoning Language Models During Instruction Tuning
Alexander Wan, Eric Wallace, Sheng Shen, Dan Klein
Tags: SILM
01 May 2023

TrojText: Test-time Invisible Textual Trojan Insertion
Qiang Lou, Ye Liu, Bo Feng
03 Mar 2023

Instruction Induction: From Few Examples to Natural Language Task Descriptions
Or Honovich, Uri Shaham, Samuel R. Bowman, Omer Levy
Tags: ELM, LRM
22 May 2022

Improving Neural Cross-Lingual Summarization via Employing Optimal Transport Distance for Knowledge Distillation
Thong Nguyen, A. Luu
07 Dec 2021

The Pile: An 800GB Dataset of Diverse Text for Language Modeling
Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, ..., Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, Connor Leahy
Tags: AIMat
31 Dec 2020