Practical Attacks on Machine Learning: A Case Study on Adversarial Windows Malware

Abstract
While machine learning is known to be vulnerable to adversarial examples, the field still lacks systematic procedures and tools for evaluating its security in different application contexts. In this article, we discuss how to develop automated and scalable security evaluations of machine learning using practical attacks, reporting a use case on Windows malware detection.
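To make the idea of an automated security evaluation concrete, here is a minimal sketch of measuring a detector's performance before and after an evasion attack. Everything in it is a hypothetical stand-in rather than the article's actual pipeline: the data, the `LogisticRegression` detector, and the toy feature-space `evade` perturbation (practical attacks on Windows malware instead manipulate the PE binary itself while preserving its functionality).

```python
# Hypothetical sketch (not the article's code): evaluate a malware detector
# by comparing its detection rate on clean vs. adversarially perturbed samples.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in data: random feature vectors labelled goodware (0) / malware (1).
X = rng.normal(size=(500, 20))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

model = LogisticRegression().fit(X, y)

def evade(model, x, eps=0.5):
    """Toy feature-space evasion: nudge the sample against the model's weights.
    Real practical attacks operate on the Windows PE file (e.g., padding or
    header perturbations) so that the binary remains functional."""
    w = model.coef_.ravel()
    return x - eps * np.sign(w)

# Detection rate on clean malware vs. under attack.
malware = X[y == 1]
detected_before = model.predict(malware).mean()
adv = np.array([evade(model, x) for x in malware])
detected_after = model.predict(adv).mean()

print(f"Detection rate (clean):        {detected_before:.2f}")
print(f"Detection rate (under attack): {detected_after:.2f}")
```

The gap between the two detection rates is the kind of quantity a scalable security evaluation would report automatically across models and attack configurations.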