BackdoorBench: A Comprehensive Benchmark of Backdoor Learning
arXiv:2206.12654 · 25 June 2022
Baoyuan Wu, Hongrui Chen, Mingda Zhang, Zihao Zhu, Shaokui Wei, Danni Yuan, Chaoxiao Shen
ELM, AAML

Papers citing "BackdoorBench: A Comprehensive Benchmark of Backdoor Learning"

22 / 22 papers shown

MergeGuard: Efficient Thwarting of Trojan Attacks in Machine Learning Models
Soheil Zibakhsh Shabgahi, Yaman Jandali, F. Koushanfar
MoMe, AAML · 54 / 0 / 0 · 06 May 2025

BackdoorDM: A Comprehensive Benchmark for Backdoor Learning in Diffusion Model
Weilin Lin, Nanjun Zhou, Y. Wang, Jianze Li, Hui Xiong, Li Liu
AAML, DiffM · 164 / 0 / 0 · 17 Feb 2025

BackdoorMBTI: A Backdoor Learning Multimodal Benchmark Tool Kit for Backdoor Defense Evaluation
Haiyang Yu, Tian Xie, Jiaping Gui, Pengyang Wang, P. Yi, Yue Wu
50 / 1 / 0 · 17 Nov 2024

Mitigating the Backdoor Effect for Multi-Task Model Merging via Safety-Aware Subspace
Jinluan Yang, A. Tang, Didi Zhu, Zhengyu Chen, Li Shen, Fei Wu
MoMe, AAML · 52 / 3 / 0 · 17 Oct 2024

Uncovering, Explaining, and Mitigating the Superficial Safety of Backdoor Defense
Rui Min, Zeyu Qin, Nevin L. Zhang, Li Shen, Minhao Cheng
AAML · 31 / 4 / 0 · 13 Oct 2024

PAD-FT: A Lightweight Defense for Backdoor Attacks via Data Purification and Fine-Tuning
Yukai Xu, Yujie Gu, Kouichi Sakurai
AAML · 23 / 0 / 0 · 18 Sep 2024

Towards Robust Physical-world Backdoor Attacks on Lane Detection
Xinwei Zhang, Aishan Liu, Tianyuan Zhang, Siyuan Liang, Xianglong Liu
AAML · 47 / 10 / 0 · 09 May 2024

VL-Trojan: Multimodal Instruction Backdoor Attacks against Autoregressive Visual Language Models
Jiawei Liang, Siyuan Liang, Man Luo, Aishan Liu, Dongchen Han, Ee-Chien Chang, Xiaochun Cao
38 / 37 / 0 · 21 Feb 2024

Acquiring Clean Language Models from Backdoor Poisoned Datasets by Downscaling Frequency Space
Zongru Wu, Zhuosheng Zhang, Pengzhou Cheng, Gongshen Liu
AAML · 44 / 4 / 0 · 19 Feb 2024

Black-Box Access is Insufficient for Rigorous AI Audits
Stephen Casper, Carson Ezell, Charlotte Siegmann, Noam Kolt, Taylor Lynn Curtis, ..., Michael Gerovitch, David Bau, Max Tegmark, David M. Krueger, Dylan Hadfield-Menell
AAML · 20 / 76 / 0 · 25 Jan 2024

Benchmarks for Detecting Measurement Tampering
Fabien Roger, Ryan Greenblatt, Max Nadeau, Buck Shlegeris, Nate Thomas
28 / 2 / 0 · 29 Aug 2023

A Comprehensive Study on the Robustness of Image Classification and Object Detection in Remote Sensing: Surveying and Benchmarking
Shaohui Mei, Jiawei Lian, Xiaofei Wang, Yuru Su, Mingyang Ma, Lap-Pui Chau
AAML · 20 / 11 / 0 · 21 Jun 2023

Black-box Backdoor Defense via Zero-shot Image Purification
Yucheng Shi, Mengnan Du, Xuansheng Wu, Zihan Guan, Jin Sun, Ninghao Liu
38 / 27 / 0 · 21 Mar 2023

AdaptGuard: Defending Against Universal Attacks for Model Adaptation
Lijun Sheng, Jian Liang, R. He, Zilei Wang, Tien-Ping Tan
AAML · 40 / 5 / 0 · 19 Mar 2023

CleanCLIP: Mitigating Data Poisoning Attacks in Multimodal Contrastive Learning
Hritik Bansal, Nishad Singhi, Yu Yang, Fan Yin, Aditya Grover, Kai-Wei Chang
AAML · 29 / 42 / 0 · 06 Mar 2023

Attacks in Adversarial Machine Learning: A Systematic Survey from the Life-cycle Perspective
Baoyuan Wu, Zihao Zhu, Li Liu, Qingshan Liu, Zhaofeng He, Siwei Lyu
AAML · 44 / 21 / 0 · 19 Feb 2023

Revisiting Personalized Federated Learning: Robustness Against Backdoor Attacks
Zeyu Qin, Liuyi Yao, Daoyuan Chen, Yaliang Li, Bolin Ding, Minhao Cheng
FedML · 33 / 25 / 0 · 03 Feb 2023

Fine-Tuning Is All You Need to Mitigate Backdoor Attacks
Zeyang Sha, Xinlei He, Pascal Berrang, Mathias Humbert, Yang Zhang
AAML · 13 / 33 / 0 · 18 Dec 2022

Selective Amnesia: On Efficient, High-Fidelity and Blind Suppression of Backdoor Effects in Trojaned Machine Learning Models
Rui Zhu, Di Tang, Siyuan Tang, XiaoFeng Wang, Haixu Tang
AAML, FedML · 29 / 13 / 0 · 09 Dec 2022

RobustBench: a standardized adversarial robustness benchmark
Francesco Croce, Maksym Andriushchenko, Vikash Sehwag, Edoardo Debenedetti, Nicolas Flammarion, M. Chiang, Prateek Mittal, Matthias Hein
VLM · 219 / 676 / 0 · 19 Oct 2020

Clean-Label Backdoor Attacks on Video Recognition Models
Shihao Zhao, Xingjun Ma, Xiang Zheng, James Bailey, Jingjing Chen, Yu-Gang Jiang
AAML · 188 / 252 / 0 · 06 Mar 2020

Adversarial examples in the physical world
Alexey Kurakin, Ian Goodfellow, Samy Bengio
SILM, AAML · 275 / 5,833 / 0 · 08 Jul 2016