
arXiv:1707.05970

Generic Black-Box End-to-End Attack against RNNs and Other API Calls Based Malware Classifiers

19 July 2017
Ishai Rosenberg
A. Shabtai
Lior Rokach
Yuval Elovici
    AAML
Abstract

Deep neural networks are being used to solve complex classification problems in which other machine learning classifiers, such as SVMs, fall short. Recurrent neural networks (RNNs) have been used for tasks that involve sequential inputs, such as speech-to-text. In the cyber security domain, RNNs based on API calls have been able to classify unsigned malware better than other classifiers. In this paper we present a black-box attack against RNNs, focusing on finding adversarial API call sequences that are misclassified by an RNN without affecting the malware's functionality. We also show that this attack is effective against many classifiers, due to the transferability principle between RNN variants, feed-forward DNNs, and state-of-the-art traditional machine learning classifiers. Finally, we introduce the transferability-by-transitivity principle, by which an attack against a generalized classifier such as an RNN variant is transferable to less generalized classifiers such as feed-forward DNNs. We conclude by discussing possible defense mechanisms.
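The abstract describes finding adversarial API call sequences that a black-box classifier misclassifies while the malware's functionality is preserved. The sketch below illustrates the general idea of query-based, functionality-preserving API-call insertion; it is not the paper's actual algorithm, and the classify interface, the NO_OP_CALLS pool, and the greedy insertion strategy are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's method): `classify` is a
# black-box scoring function returning P(malware), NO_OP_CALLS is a
# hypothetical pool of API calls assumed not to alter the malware's
# behavior, and insertion positions are chosen greedily at random.
import random
from typing import Callable, List

NO_OP_CALLS = ["GetTickCount", "Sleep", "GetCurrentProcessId"]  # hypothetical no-op pool

def evade(api_seq: List[str],
          classify: Callable[[List[str]], float],
          threshold: float = 0.5,
          max_insertions: int = 100) -> List[str]:
    """Insert functionality-preserving API calls until the black-box
    classifier's malware score drops below the decision threshold."""
    seq = list(api_seq)
    for _ in range(max_insertions):
        if classify(seq) < threshold:  # already misclassified as benign
            return seq
        # Try each candidate call at a random position; keep the variant
        # that lowers the malware score the most (greedy step).
        pos = random.randrange(len(seq) + 1)
        best_seq, best_score = seq, classify(seq)
        for call in NO_OP_CALLS:
            cand = seq[:pos] + [call] + seq[pos:]
            score = classify(cand)
            if score < best_score:
                best_seq, best_score = cand, score
        seq = best_seq
    return seq  # may still be classified as malware if the budget is exhausted
```

Because only the classifier's output is queried, such a loop treats the model as a black box, which is also why the resulting adversarial sequences can be tested for transferability against other classifier families.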
