BackdoorLLM: A Comprehensive Benchmark for Backdoor Attacks on Large Language Models
arXiv:2408.12798 · 23 August 2024
Yige Li, Hanxun Huang, Yunhan Zhao, Xingjun Ma, Jun Sun
Tags: AAML, SILM
Papers citing "BackdoorLLM: A Comprehensive Benchmark for Backdoor Attacks on Large Language Models" (13 of 13 papers shown)
BadVideo: Stealthy Backdoor Attack against Text-to-Video Generation
Ruotong Wang, Mingli Zhu, Jiarong Ou, R. J. Chen, Xin Tao, Pengfei Wan, Baoyuan Wu
DiffM, AAML, VGen · 23 Apr 2025

Propaganda via AI? A Study on Semantic Backdoors in Large Language Models
Nay Myat Min, Long H. Pham, Yige Li, Jun Sun
AAML · 15 Apr 2025

Parasite: A Steganography-based Backdoor Attack Framework for Diffusion Models
J. Chen, Yu Pan, Yi Du, Chunkai Wu, Lin Wang
DiffM · 08 Apr 2025

Class-Conditional Neural Polarizer: A Lightweight and Effective Backdoor Defense by Purifying Poisoned Features
Mingli Zhu, Shaokui Wei, Hongyuan Zha, Baoyuan Wu
AAML · 23 Feb 2025

BackdoorDM: A Comprehensive Benchmark for Backdoor Learning in Diffusion Model
Weilin Lin, Nanjun Zhou, Y. Wang, Jianze Li, Hui Xiong, Li Liu
AAML, DiffM · 17 Feb 2025

Concept-ROT: Poisoning Concepts in Large Language Models with Model Editing
Keltin Grimes, Marco Christiani, David Shriver, Marissa Connor
KELM · 17 Dec 2024

CROW: Eliminating Backdoors from Large Language Models via Internal Consistency Regularization
Nay Myat Min, Long H. Pham, Yige Li, Jun Sun
AAML · 18 Nov 2024

Unlearning as multi-task optimization: A normalized gradient difference approach with an adaptive learning rate
Zhiqi Bu, Xiaomeng Jin, Bhanukiran Vinzamuri, Anil Ramakrishna, Kai-Wei Chang, V. Cevher, Mingyi Hong
MU · 29 Oct 2024

AdvBDGen: Adversarially Fortified Prompt-Specific Fuzzy Backdoor Generator Against LLM Alignment
Pankayaraj Pathmanathan, Udari Madhushani Sehwag, Michael-Andrei Panaitescu-Liess, Furong Huang
SILM, AAML · 15 Oct 2024

SplitLLM: Collaborative Inference of LLMs for Model Placement and Throughput Optimization
Akrit Mudvari, Yuang Jiang, Leandros Tassiulas
14 Oct 2024

PoisonBench: Assessing Large Language Model Vulnerability to Data Poisoning
Tingchen Fu, Mrinank Sharma, Philip H. S. Torr, Shay B. Cohen, David M. Krueger, Fazl Barez
AAML · 11 Oct 2024

Mitigating Backdoor Threats to Large Language Models: Advancement and Challenges
Qin Liu, Wenjie Mo, Terry Tong, Jiashu Xu, Fei Wang, Chaowei Xiao, Muhao Chen
AAML · 30 Sep 2024

Harmful Fine-tuning Attacks and Defenses for Large Language Models: A Survey
Tiansheng Huang, Sihao Hu, Fatih Ilhan, Selim Furkan Tekin, Ling Liu
AAML · 26 Sep 2024