Cited By: arXiv 2310.18603

Large Language Models Are Better Adversaries: Exploring Generative Clean-Label Backdoor Attacks Against Text Classifiers
28 October 2023
Wencong You, Zayd Hammoudeh, Daniel Lowd
Tags: AAML
Links: ArXiv | PDF | HTML
Papers citing "Large Language Models Are Better Adversaries: Exploring Generative Clean-Label Backdoor Attacks Against Text Classifiers" (4 / 4 papers shown)
Poisoning Language Models During Instruction Tuning
Alexander Wan, Eric Wallace, Sheng Shen, Dan Klein
Tags: SILM · 92 / 185 / 0 · 01 May 2023

Large Language Models are Zero-Shot Reasoners
Takeshi Kojima, S. Gu, Machel Reid, Yutaka Matsuo, Yusuke Iwasawa
Tags: ReLM, LRM · 307 / 4,084 / 0 · 24 May 2022

Mind the Style of Text! Adversarial and Backdoor Attacks Based on Text Style Transfer
Fanchao Qi, Yangyi Chen, Xurui Zhang, Mukai Li, Zhiyuan Liu, Maosong Sun
Tags: AAML, SILM · 77 / 175 / 0 · 14 Oct 2021

Mitigating backdoor attacks in LSTM-based Text Classification Systems by Backdoor Keyword Identification
Chuanshuai Chen, Jiazhu Dai
Tags: SILM · 55 / 126 / 0 · 11 Jul 2020