Explanation-Guided Backdoor Poisoning Attacks Against Malware Classifiers
Giorgio Severi, J. Meyer, Scott E. Coull, Alina Oprea
arXiv:2003.01031 · 2 March 2020 · AAML, SILM

Papers citing "Explanation-Guided Backdoor Poisoning Attacks Against Malware Classifiers" (9 of 9 papers shown)

Towards A Proactive ML Approach for Detecting Backdoor Poison Samples
Xiangyu Qi, Tinghao Xie, Jiachen T. Wang, Tong Wu, Saeed Mahloujifar, Prateek Mittal
AAML · 26 May 2022

Towards Practical Deployment-Stage Backdoor Attack on Deep Neural Networks
Xiangyu Qi, Tinghao Xie, Ruizhe Pan, Jifeng Zhu, Yong-Liang Yang, Kai Bu
AAML · 25 Nov 2021

ML-based IoT Malware Detection Under Adversarial Settings: A Systematic Evaluation
Ahmed A. Abusnaina, Afsah Anwar, Sultan Alshamrani, Abdulrahman Alabduljabbar, Rhongho Jang, Daehun Nyang, David A. Mohaisen
AAML · 30 Aug 2021

Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses
Micah Goldblum, Dimitris Tsipras, Chulin Xie, Xinyun Chen, Avi Schwarzschild, D. Song, A. Madry, Bo-wen Li, Tom Goldstein
SILM · 18 Dec 2020

Being Single Has Benefits. Instance Poisoning to Deceive Malware Classifiers
T. Shapira, David Berend, Ishai Rosenberg, Yang Liu, A. Shabtai, Yuval Elovici
AAML · 30 Oct 2020

Local and Central Differential Privacy for Robustness and Privacy in Federated Learning
Mohammad Naseri, Jamie Hayes, Emiliano De Cristofaro
FedML · 08 Sep 2020

Backdoor Attacks and Countermeasures on Deep Learning: A Comprehensive Review
Yansong Gao, Bao Gia Doan, Zhi-Li Zhang, Siqi Ma, Jiliang Zhang, Anmin Fu, Surya Nepal, Hyoungshick Kim
AAML · 21 Jul 2020

You Autocomplete Me: Poisoning Vulnerabilities in Neural Code Completion
R. Schuster, Congzheng Song, Eran Tromer, Vitaly Shmatikov
SILM, AAML · 05 Jul 2020

Backdoors in Neural Models of Source Code
Goutham Ramakrishnan, Aws Albarghouthi
AAML, SILM · 11 Jun 2020