Bypassing Backdoor Detection Algorithms in Deep Learning

31 May 2019
T. Tan, Reza Shokri
FedML, AAML

Papers citing "Bypassing Backdoor Detection Algorithms in Deep Learning"

20 / 20 papers shown

Everyone Can Attack: Repurpose Lossy Compression as a Natural Backdoor Attack
Sze Jue Yang, Q. Nguyen, Chee Seng Chan, Khoa D. Doan
AAML, DiffM
31 Aug 2023

Attacks in Adversarial Machine Learning: A Systematic Survey from the Life-cycle Perspective
Baoyuan Wu, Zihao Zhu, Li Liu, Qingshan Liu, Zhaofeng He, Siwei Lyu
AAML
19 Feb 2023

Backdoor Learning for NLP: Recent Advances, Challenges, and Future Research Directions
Marwan Omar
SILM, AAML
14 Feb 2023

Sneaky Spikes: Uncovering Stealthy Backdoor Attacks in Spiking Neural Networks with Neuromorphic Data
Gorka Abad, Oguzhan Ersoy, S. Picek, A. Urbieta
AAML
13 Feb 2023

SoK: A Systematic Evaluation of Backdoor Trigger Characteristics in Image Classification
Gorka Abad, Jing Xu, Stefanos Koffas, Behrad Tajalli, S. Picek, Mauro Conti
AAML
03 Feb 2023

Universal Soldier: Using Universal Adversarial Perturbations for Detecting Backdoor Attacks
Xiaoyun Xu, Oguzhan Ersoy, S. Picek
AAML
01 Feb 2023

Membership Inference Attacks Against Semantic Segmentation Models
Tomás Chobola, Dmitrii Usynin, Georgios Kaissis
MIACV
02 Dec 2022

Marksman Backdoor: Backdoor Attacks with Arbitrary Target Class
Khoa D. Doan, Yingjie Lao, Ping Li
17 Oct 2022

A Survey of PPG's Application in Authentication
Lin Li, Chao Chen, Lei Pan, Leo Yu Zhang, Zhifeng Wang, Jun Zhang, Yang Xiang
27 Jan 2022

FIBA: Frequency-Injection based Backdoor Attack in Medical Image Analysis
Yu Feng, Benteng Ma, Jing Zhang, Shanshan Zhao, Yong-quan Xia, Dacheng Tao
AAML
02 Dec 2021

Backdoor Attack through Frequency Domain
Tong Wang, Yuan Yao, Feng Xu, Shengwei An, Hanghang Tong, Ting Wang
AAML
22 Nov 2021

Quantization Backdoors to Deep Learning Commercial Frameworks
Hua Ma, Huming Qiu, Yansong Gao, Zhi-Li Zhang, A. Abuadbba, Minhui Xue, Anmin Fu, Jiliang Zhang, S. Al-Sarawi, Derek Abbott
MQ
20 Aug 2021

BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning
Jinyuan Jia, Yupei Liu, Neil Zhenqiang Gong
SILM, SSL
01 Aug 2021

SPECTRE: Defending Against Backdoor Attacks Using Robust Statistics
J. Hayase, Weihao Kong, Raghav Somani, Sewoong Oh
AAML
22 Apr 2021

TrojanZoo: Towards Unified, Holistic, and Practical Evaluation of Neural Backdoors
Ren Pang, Zheng-Wei Zhang, Xiangshan Gao, Zhaohan Xi, S. Ji, Peng Cheng, Xiapu Luo, Ting Wang
AAML
16 Dec 2020

Privacy and Robustness in Federated Learning: Attacks and Defenses
Lingjuan Lyu, Han Yu, Xingjun Ma, Chen Chen, Lichao Sun, Jun Zhao, Qiang Yang, Philip S. Yu
FedML
07 Dec 2020

Fine-tuning Is Not Enough: A Simple yet Effective Watermark Removal Attack for DNN Models
Shangwei Guo, Tianwei Zhang, Han Qiu, Yi Zeng, Tao Xiang, Yang Liu
AAML
18 Sep 2020

Weight Poisoning Attacks on Pre-trained Models
Keita Kurita, Paul Michel, Graham Neubig
AAML, SILM
14 Apr 2020

Backdooring and Poisoning Neural Networks with Image-Scaling Attacks
Erwin Quiring, Konrad Rieck
AAML
19 Mar 2020

Towards Probabilistic Verification of Machine Unlearning
David M. Sommer, Liwei Song, Sameer Wagh, Prateek Mittal
AAML
09 Mar 2020