
Defense Methods Against Adversarial Examples for Recurrent Neural Networks

28 January 2019
Ishai Rosenberg
A. Shabtai
Yuval Elovici
Lior Rokach
Topics: AAML, GAN
Abstract

Adversarial examples are known to mislead deep learning models into misclassifying them, even in domains where such models achieve state-of-the-art performance. Until recently, research on both attack and defense methods focused on image recognition, primarily using convolutional neural networks (CNNs). In recent years, adversarial example generation methods for recurrent neural networks (RNNs) have been published, demonstrating that RNN classifiers are also vulnerable to such attacks. In this paper, we present a novel defense method, termed sequence squeezing, to make RNN classifiers more robust against such attacks. Our method differs from previous defense methods, which were designed only for non-sequence-based models. We also implement four additional RNN defense methods inspired by recently published CNN defense methods. We evaluate our methods against state-of-the-art attacks in the cyber security domain, where real adversaries (malware developers) exist, but our methods can be applied against other discrete-sequence-based adversarial attacks, e.g., in the NLP domain. Using our methods, we were able to decrease the effectiveness of such attacks from 99.9% to 15%.
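To give a rough intuition for the squeezing idea applied to discrete sequences, the sketch below compares a classifier's score on an input sequence against its score on a "squeezed" version (here, canonicalizing tokens and collapsing consecutive duplicates), flagging a large divergence as a possible adversarial example. This is a minimal illustration in the spirit of the abstract, not the paper's actual sequence squeezing algorithm; the token names, the toy classifier, and the threshold are all hypothetical.

```python
def squeeze_sequence(tokens, canonical):
    """Replace each token with its canonical representative and collapse
    consecutive duplicates -- a crude stand-in for sequence squeezing."""
    out = []
    for t in tokens:
        c = canonical.get(t, t)
        if not out or out[-1] != c:
            out.append(c)
    return out

def detect_adversarial(classify, tokens, canonical, threshold=0.3):
    """Flag an input when the classifier's scores on the original and the
    squeezed sequence diverge by more than `threshold`."""
    original_score = classify(tokens)
    squeezed_score = classify(squeeze_sequence(tokens, canonical))
    return abs(original_score - squeezed_score) > threshold

# Toy classifier: scores an API-call trace by its fraction of
# "suspicious" calls (purely illustrative, not a real malware model).
SUSPICIOUS = {"WriteProcessMemory", "CreateRemoteThread"}
def toy_classify(tokens):
    return sum(t in SUSPICIOUS for t in tokens) / max(len(tokens), 1)

# An attacker pads a malicious trace with repeated benign calls to dilute
# the score; squeezing collapses the padding, restoring the divergence.
malicious = ["WriteProcessMemory", "CreateRemoteThread"]
adversarial = malicious + ["Sleep"] * 8
benign = ["Sleep", "GetTickCount"] * 3

print(detect_adversarial(toy_classify, adversarial, {}))  # flagged
print(detect_adversarial(toy_classify, benign, {}))       # not flagged
```

The design mirrors feature squeezing from the CNN literature: rather than hardening the model itself, the defense checks whether a prediction is stable under a semantics-preserving reduction of the input.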
