Bypassing Backdoor Detection Algorithms in Deep Learning
arXiv: 1905.13409 · 31 May 2019
Authors: T. Tan, Reza Shokri
Topics: FedML, AAML
Papers citing "Bypassing Backdoor Detection Algorithms in Deep Learning"
19 papers shown
Everyone Can Attack: Repurpose Lossy Compression as a Natural Backdoor Attack
Sze Jue Yang, Q. Nguyen, Chee Seng Chan, Khoa D. Doan · AAML, DiffM · 31 Aug 2023

Attacks in Adversarial Machine Learning: A Systematic Survey from the Life-cycle Perspective
Baoyuan Wu, Zihao Zhu, Li Liu, Qingshan Liu, Zhaofeng He, Siwei Lyu · AAML · 19 Feb 2023

Backdoor Learning for NLP: Recent Advances, Challenges, and Future Research Directions
Marwan Omar · SILM, AAML · 14 Feb 2023

Sneaky Spikes: Uncovering Stealthy Backdoor Attacks in Spiking Neural Networks with Neuromorphic Data
Gorka Abad, Oguzhan Ersoy, S. Picek, A. Urbieta · AAML · 13 Feb 2023

SoK: A Systematic Evaluation of Backdoor Trigger Characteristics in Image Classification
Gorka Abad, Jing Xu, Stefanos Koffas, Behrad Tajalli, S. Picek, Mauro Conti · AAML · 03 Feb 2023

Universal Soldier: Using Universal Adversarial Perturbations for Detecting Backdoor Attacks
Xiaoyun Xu, Oguzhan Ersoy, S. Picek · AAML · 01 Feb 2023

Membership Inference Attacks Against Semantic Segmentation Models
Tomás Chobola, Dmitrii Usynin, Georgios Kaissis · MIACV · 02 Dec 2022

Marksman Backdoor: Backdoor Attacks with Arbitrary Target Class
Khoa D. Doan, Yingjie Lao, Ping Li · 17 Oct 2022

FIBA: Frequency-Injection based Backdoor Attack in Medical Image Analysis
Yu Feng, Benteng Ma, Jing Zhang, Shanshan Zhao, Yong-quan Xia, Dacheng Tao · AAML · 02 Dec 2021

Backdoor Attack through Frequency Domain
Tong Wang, Yuan Yao, Feng Xu, Shengwei An, Hanghang Tong, Ting Wang · AAML · 22 Nov 2021

Quantization Backdoors to Deep Learning Commercial Frameworks
Hua Ma, Huming Qiu, Yansong Gao, Zhi-Li Zhang, A. Abuadbba, Minhui Xue, Anmin Fu, Jiliang Zhang, S. Al-Sarawi, Derek Abbott · MQ · 20 Aug 2021

BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning
Jinyuan Jia, Yupei Liu, Neil Zhenqiang Gong · SILM, SSL · 01 Aug 2021

SPECTRE: Defending Against Backdoor Attacks Using Robust Statistics
J. Hayase, Weihao Kong, Raghav Somani, Sewoong Oh · AAML · 22 Apr 2021

TrojanZoo: Towards Unified, Holistic, and Practical Evaluation of Neural Backdoors
Ren Pang, Zheng-Wei Zhang, Xiangshan Gao, Zhaohan Xi, S. Ji, Peng Cheng, Xiapu Luo, Ting Wang · AAML · 16 Dec 2020

Privacy and Robustness in Federated Learning: Attacks and Defenses
Lingjuan Lyu, Han Yu, Xingjun Ma, Chen Chen, Lichao Sun, Jun Zhao, Qiang Yang, Philip S. Yu · FedML · 07 Dec 2020

Fine-tuning Is Not Enough: A Simple yet Effective Watermark Removal Attack for DNN Models
Shangwei Guo, Tianwei Zhang, Han Qiu, Yi Zeng, Tao Xiang, Yang Liu · AAML · 18 Sep 2020

Weight Poisoning Attacks on Pre-trained Models
Keita Kurita, Paul Michel, Graham Neubig · AAML, SILM · 14 Apr 2020

Backdooring and Poisoning Neural Networks with Image-Scaling Attacks
Erwin Quiring, Konrad Rieck · AAML · 19 Mar 2020

Towards Probabilistic Verification of Machine Unlearning
David M. Sommer, Liwei Song, Sameer Wagh, Prateek Mittal · AAML · 09 Mar 2020