Crafting Adversarial Input Sequences for Recurrent Neural Networks
arXiv:1604.08275
28 April 2016
Nicolas Papernot, Patrick McDaniel, A. Swami, Richard E. Harang
AAML, GAN, SILM

Papers citing "Crafting Adversarial Input Sequences for Recurrent Neural Networks"

50 of 206 citing papers shown.

Reprogramming Language Models for Molecular Representation Learning
R. Vinod, Pin-Yu Chen, Payel Das
AAML, OOD
07 Dec 2020

Incorporating Hidden Layer representation into Adversarial Attacks and Defences
Haojing Shen, Sihong Chen, Ran Wang, Xizhao Wang
AAML
28 Nov 2020

CoCo: Controllable Counterfactuals for Evaluating Dialogue State Trackers
Shiyang Li, Semih Yavuz, Kazuma Hashimoto, Jia Li, Tong Niu, Nazneen Rajani, Xifeng Yan, Yingbo Zhou, Caiming Xiong
24 Oct 2020

Rewriting Meaningful Sentences via Conditional BERT Sampling and an application on fooling text classifiers
Lei Xu, Ivan Ramirez, K. Veeramachaneni
AAML
22 Oct 2020

Pair the Dots: Jointly Examining Training History and Test Stimuli for Model Interpretability
Yuxian Meng, Chun Fan, Zijun Sun, Eduard H. Hovy, Leilei Gan, Jiwei Li
FAtt
14 Oct 2020

Poison Attacks against Text Datasets with Conditional Adversarially Regularized Autoencoder
Alvin Chan, Yi Tay, Yew-Soon Ong, Aston Zhang
SILM
06 Oct 2020

Adversarial Attack and Defense of Structured Prediction Models
Wenjuan Han, Liwen Zhang, Yong Jiang, Kewei Tu
AAML
04 Oct 2020

STRATA: Simple, Gradient-Free Attacks for Models of Code
Jacob Mitchell Springer, Bryn Reinstadler, Una-May O’Reilly
AAML
28 Sep 2020

Improving Robustness and Generality of NLP Models Using Disentangled Representations
Jiawei Wu, Xiaoya Li, Xiang Ao, Yuxian Meng, Leilei Gan, Jiwei Li
OOD, DRL
21 Sep 2020

Learning to Attack: Towards Textual Adversarial Attacking in Real-world Situations
Yuan Zang, Bairu Hou, Fanchao Qi, Zhiyuan Liu, Xiaojun Meng, Maosong Sun
19 Sep 2020

OpenAttack: An Open-source Textual Adversarial Attack Toolkit
Guoyang Zeng, Fanchao Qi, Qianrui Zhou, Ting Zhang, Zixian Ma, Bairu Hou, Yuan Zang, Zhiyuan Liu, Maosong Sun
AAML
19 Sep 2020

Dynamically Computing Adversarial Perturbations for Recurrent Neural Networks
Shankar A. Deka, D. Stipanović, Claire Tomlin
AAML
07 Sep 2020

On the Generalization Properties of Adversarial Training
Yue Xing, Qifan Song, Guang Cheng
AAML
15 Aug 2020

Adversarial Training with Fast Gradient Projection Method against Synonym Substitution based Text Attacks
Xiaosen Wang, Yichen Yang, Yihe Deng, Kun He
OOD, AAML
09 Aug 2020

Adversarial Machine Learning Attacks and Defense Methods in the Cyber Security Domain
Ishai Rosenberg, A. Shabtai, Yuval Elovici, Lior Rokach
AAML
05 Jul 2020

Differentiable Language Model Adversarial Attacks on Categorical Sequence Classifiers
I. Fursov, A. Zaytsev, Nikita Klyuchnikov, A. Kravchenko, Evgeny Burnaev
AAML, SILM
19 Jun 2020

Adversarial Attacks and Defense on Texts: A Survey
A. Huq, Mst. Tasnim Pervin
AAML
28 May 2020

Scalable Polyhedral Verification of Recurrent Neural Networks
Wonryong Ryou, Jiayu Chen, Mislav Balunović, Gagandeep Singh, Andrei Dan, Martin Vechev
27 May 2020

Learning Robust Models for e-Commerce Product Search
Thanh V. Nguyen, Nikhil S. Rao, Karthik Subbian
CML, NoLa, OOD
07 May 2020

Reevaluating Adversarial Examples in Natural Language
John X. Morris, Eli Lifland, Jack Lanchantin, Yangfeng Ji, Yanjun Qi
SILM, AAML
25 Apr 2020

Weight Poisoning Attacks on Pre-trained Models
Keita Kurita, Paul Michel, Graham Neubig
AAML, SILM
14 Apr 2020

Frequency-Guided Word Substitutions for Detecting Textual Adversarial Examples
Maximilian Mozes, Pontus Stenetorp, Bennett Kleinberg, Lewis D. Griffin
AAML
13 Apr 2020

Generating Natural Language Adversarial Examples on a Large Scale with Generative Models
Yankun Ren, J. Lin, Siliang Tang, Jun Zhou, Shuang Yang, Yuan Qi, Xiang Ren
GAN, AAML, SILM
10 Mar 2020

Gradient-based adversarial attacks on categorical sequence models via traversing an embedded world
I. Fursov, Alexey Zaytsev, Nikita Klyuchnikov, A. Kravchenko, Evgeny Burnaev
AAML, SILM
09 Mar 2020

Adversarial Machine Learning: Bayesian Perspectives
D. Insua, Roi Naveiro, Víctor Gallego, Jason Poulos
AAML
07 Mar 2020

Robustness Verification for Transformers
Zhouxing Shi, Huan Zhang, Kai-Wei Chang, Minlie Huang, Cho-Jui Hsieh
AAML
16 Feb 2020

Deep Learning for Source Code Modeling and Generation: Models, Applications and Challenges
T. H. Le, Hao Chen, Muhammad Ali Babar
VLM
13 Feb 2020

Predictive Power of Nearest Neighbors Algorithm under Random Perturbation
Yue Xing, Qifan Song, Guang Cheng
13 Feb 2020

Adversarial Robustness for Code
Pavol Bielik, Martin Vechev
AAML
11 Feb 2020

FastWordBug: A Fast Method To Generate Adversarial Text Against NLP Applications
Dou Goodman, Zhonghou Lv, Minghua Wang
AAML
31 Jan 2020

To Transfer or Not to Transfer: Misclassification Attacks Against Transfer Learned Text Classifiers
Bijeeta Pal, Shruti Tople
AAML
08 Jan 2020

Generating Semantic Adversarial Examples via Feature Manipulation
Shuo Wang, Surya Nepal, Carsten Rudolph, M. Grobler, Shangyu Chen, Tianle Chen
AAML
06 Jan 2020

T3: Tree-Autoencoder Constrained Adversarial Text Generation for Targeted Attack
Wei Ping, Hengzhi Pei, Boyuan Pan, Han Liu, Shuohang Wang, Yangqiu Song
AAML
22 Dec 2019

Explainability and Adversarial Robustness for RNNs
Alexander Hartl, Maximilian Bachl, J. Fabini, Tanja Zseby
AAML
20 Dec 2019

Fastened CROWN: Tightened Neural Network Robustness Certificates
Zhaoyang Lyu, Ching-Yun Ko, Zhifeng Kong, Ngai Wong, Dahua Lin, Luca Daniel
02 Dec 2019

Towards Security Threats of Deep Learning Systems: A Survey
Yingzhe He, Guozhu Meng, Kai Chen, Xingbo Hu, Jinwen He
AAML, ELM
28 Nov 2019

RNN-Test: Towards Adversarial Testing for Recurrent Neural Network Systems
Jianmin Guo, Yue Zhao, Xueying Han, Yu Jiang
AAML
11 Nov 2019

Coverage Guided Testing for Recurrent Neural Networks
Wei Huang, Youcheng Sun, Xing-E. Zhao, James Sharp, Wenjie Ruan, Jie Meng, Xiaowei Huang
AAML
05 Nov 2019

Adversarial Music: Real World Audio Adversary Against Wake-word Detection System
Juncheng Billy Li, Shuhui Qu, Xinjian Li, Joseph Szurley, J. Zico Kolter, Florian Metze
AAML
31 Oct 2019

A Unified Framework for Data Poisoning Attack to Graph-based Semi-supervised Learning
Xuanqing Liu, Si Si, Xiaojin Zhu, Yang Li, Cho-Jui Hsieh
AAML
30 Oct 2019

Universal Adversarial Perturbation for Text Classification
Hang Gao, Tim Oates
AAML
10 Oct 2019

GAMIN: An Adversarial Approach to Black-Box Model Inversion
Ulrich Aïvodji, Sébastien Gambs, Timon Ther
MLAU
26 Sep 2019

FENCE: Feasible Evasion Attacks on Neural Networks in Constrained Environments
Alesia Chernikova, Alina Oprea
AAML
23 Sep 2019

Natural Language Adversarial Defense through Synonym Encoding
Xiaosen Wang, Hao Jin, Yichen Yang, Kun He
AAML
15 Sep 2019

Effectiveness of Adversarial Examples and Defenses for Malware Classification
Robert Podschwadt, Hassan Takabi
AAML
10 Sep 2019

Natural Adversarial Sentence Generation with Gradient-based Perturbation
Yu-Lun Hsieh, Minhao Cheng, Da-Cheng Juan, Wei Wei, W. Hsu, Cho-Jui Hsieh
AAML
06 Sep 2019

Robustness to Modification with Shared Words in Paraphrase Identification
Zhouxing Shi, Minlie Huang
05 Sep 2019

Universal Adversarial Triggers for Attacking and Analyzing NLP
Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gardner, Sameer Singh
AAML, SILM
20 Aug 2019

Neural Network Verification for the Masses (of AI graduates)
Ekaterina Komendantskaya, Rob Stewart, Kirsy Duncan, Daniel Kienitz, Pierre Le Hen, Pascal Bacchus
02 Jul 2019

Robustness Guarantees for Deep Neural Networks on Videos
Min Wu, Marta Z. Kwiatkowska
AAML
28 Jun 2019