ResearchTrend.AI

Explanation-Guided Backdoor Poisoning Attacks Against Malware Classifiers
2 March 2020
Giorgio Severi, J. Meyer, Scott E. Coull, Alina Oprea
AAML, SILM

Papers citing "Explanation-Guided Backdoor Poisoning Attacks Against Malware Classifiers"

9 papers shown
Towards A Proactive ML Approach for Detecting Backdoor Poison Samples
USENIX Security Symposium (USENIX Security), 2022
Xiangyu Qi, Tinghao Xie, Jiachen T. Wang, Tong Wu, Saeed Mahloujifar, Prateek Mittal
AAML
26 May 2022

Towards Practical Deployment-Stage Backdoor Attack on Deep Neural Networks
Xiangyu Qi, Tinghao Xie, Ruizhe Pan, Jifeng Zhu, Yong-Liang Yang, Kai Bu
AAML
25 Nov 2021

ML-based IoT Malware Detection Under Adversarial Settings: A Systematic Evaluation
Ahmed A. Abusnaina, Afsah Anwar, Sultan Alshamrani, Abdulrahman Alabduljabbar, Rhongho Jang, Daehun Nyang, David A. Mohaisen
AAML
30 Aug 2021

Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2020
Micah Goldblum, Dimitris Tsipras, Chulin Xie, Xinyun Chen, Avi Schwarzschild, Basel Alomair, Aleksander Madry, Yue Liu, Tom Goldstein
SILM
18 Dec 2020

Being Single Has Benefits. Instance Poisoning to Deceive Malware Classifiers
T. Shapira, David Berend, Ishai Rosenberg, Yang Liu, A. Shabtai, Yuval Elovici
AAML
30 Oct 2020

Local and Central Differential Privacy for Robustness and Privacy in Federated Learning
Network and Distributed System Security Symposium (NDSS), 2020
Mohammad Naseri, Jamie Hayes, Emiliano De Cristofaro
FedML
08 Sep 2020

Backdoor Attacks and Countermeasures on Deep Learning: A Comprehensive Review
Yansong Gao, Bao Gia Doan, Zhi-Li Zhang, Siqi Ma, Jiliang Zhang, Anmin Fu, Surya Nepal, Hyoungshick Kim
AAML
21 Jul 2020

You Autocomplete Me: Poisoning Vulnerabilities in Neural Code Completion
R. Schuster, Congzheng Song, Eran Tromer, Vitaly Shmatikov
SILM, AAML
05 Jul 2020

Backdoors in Neural Models of Source Code
International Conference on Pattern Recognition (ICPR), 2020
Goutham Ramakrishnan, Aws Albarghouthi
AAML, SILM
11 Jun 2020