arXiv: 2312.11057
DataElixir: Purifying Poisoned Dataset to Mitigate Backdoor Attacks via Diffusion Models
18 December 2023
Jiachen Zhou, Peizhuo Lv, Yibing Lan, Guozhu Meng, Kai Chen, Hualong Ma
AAML
Papers citing "DataElixir: Purifying Poisoned Dataset to Mitigate Backdoor Attacks via Diffusion Models" (5 papers)
Backdoor Defense in Diffusion Models via Spatial Attention Unlearning
Abha Jha, Ashwath Vaithinathan Aravindan, Matthew Salaway, Atharva Sandeep Bhide, Duygu Nur Yaldiz
AAML · 21 Apr 2025
REFINE: Inversion-Free Backdoor Defense via Model Reprogramming
Y. Chen, Shuo Shao, Enhao Huang, Yiming Li, Pin-Yu Chen, Zhanyue Qin, Kui Ren
AAML · 22 Feb 2025
BackdoorBench: A Comprehensive Benchmark and Analysis of Backdoor Learning
Baoyuan Wu, Hongrui Chen, Mingda Zhang, Zihao Zhu, Shaokui Wei, Danni Yuan, Mingli Zhu, Ruotong Wang, Li Liu, Chaoxiao Shen
AAML · ELM · 26 Jan 2024
Unlearnable Examples Give a False Sense of Security: Piercing through Unexploitable Data with Learnable Examples
Wanzhu Jiang, Yunfeng Diao, He-Nan Wang, Jianxin Sun, Hao Wu, Richang Hong
16 May 2023
DeepPayload: Black-box Backdoor Attack on Deep Learning Models through Neural Payload Injection
Yuanchun Li, Jiayi Hua, Haoyu Wang, Chunyang Chen, Yunxin Liu
FedML · SILM · 18 Jan 2021