Feature Conservation in Adversarial Classifier Evasion: A Case Study
Machine learning is widely used in security applications, particularly in the form of statistical classification aimed at distinguishing benign from malicious entities. Recent research has shown that such classifiers are often vulnerable to evasion attacks, whereby adversaries modify their behavior to be categorized as benign while preserving malicious functionality. Research into evasion attacks has followed two paradigms: attacks in problem space, where the actual malicious instance is modified, and attacks in feature space, where the attack is abstracted into modifying the numerical features of an instance to evade a classifier. In contrast, research into designing evasion-robust classifiers generally relies on feature space attack models. We make several contributions to address this gap, using PDF malware detection as a case study. First, we present a systematic retraining procedure that uses an automated problem space attack generator to design a more robust PDF malware detector. Second, we demonstrate that replacing problem space attacks with feature space attacks dramatically reduces the robustness of the resulting classifier, severely undermining feature space defense methods to date. Third, we demonstrate the existence of conserved (or invariant) features, and show how these can be leveraged to design evasion-robust classifiers that are nearly as effective as, and far more efficient than, those relying on the problem space attack. Finally, we present a general approach for identifying conserved features.
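The retraining procedure summarized above can be pictured as an iterative loop: attack the current detector, fold the successful evasive variants back into the training set, and refit until the attack stops succeeding. The following is a minimal sketch of that loop under stated assumptions; `problem_space_attack` is a hypothetical stand-in for the paper's automated problem space attack generator (e.g., a PDF mutation engine), implemented here as a simple feature perturbation only so the example runs end to end.

```python
# Sketch of an iterative adversarial-retraining loop (not the authors' exact procedure).
# Assumes: X is a (n_samples, n_features) array, y has labels 1 = malicious, 0 = benign.

import numpy as np
from sklearn.ensemble import RandomForestClassifier


def problem_space_attack(clf, X_malicious, rng, step=0.1, max_tries=20):
    """Hypothetical attack stand-in: nudge malicious samples until the
    classifier labels them benign, mimicking evasive variants."""
    evasive = []
    for x in X_malicious:
        x_adv = x.copy()
        for _ in range(max_tries):
            if clf.predict(x_adv.reshape(1, -1))[0] == 0:
                evasive.append(x_adv)
                break
            x_adv = x_adv + rng.normal(scale=step, size=x_adv.shape)
    return np.array(evasive)


def retrain_until_robust(X, y, n_rounds=10, seed=0):
    """Retrain the detector on attack-generated evasive variants each round,
    stopping once the attack no longer produces successful evasions."""
    rng = np.random.default_rng(seed)
    X_train, y_train = X.copy(), y.copy()
    clf = RandomForestClassifier(random_state=seed).fit(X_train, y_train)
    for _ in range(n_rounds):
        evasive = problem_space_attack(clf, X_train[y_train == 1], rng)
        if len(evasive) == 0:  # attack found no evasive variants: stop
            break
        X_train = np.vstack([X_train, evasive])
        y_train = np.concatenate([y_train, np.ones(len(evasive), dtype=int)])
        clf = RandomForestClassifier(random_state=seed).fit(X_train, y_train)
    return clf
```

In the paper's setting, the per-round attack operates on actual PDF files rather than feature vectors; the point of the sketch is only the retrain-on-successful-evasions structure of the defense.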