Deep Text Classification Can be Fooled (arXiv:1704.08006)
26 April 2017
Bin Liang, Hongcheng Li, Miaoqiang Su, Pan Bian, Xirong Li, Wenchang Shi
Tags: AAML
Papers citing "Deep Text Classification Can be Fooled" (8 of 58 papers shown)
Discrete Adversarial Attacks and Submodular Optimization with Applications to Text Classification
Qi Lei, Lingfei Wu, Pin-Yu Chen, A. Dimakis, Inderjit S. Dhillon, Michael Witbrock
AAML | 01 Dec 2018

Evading classifiers in discrete domains with provable optimality guarantees
B. Kulynych, Jamie Hayes, N. Samarin, Carmela Troncoso
AAML | 25 Oct 2018

Attack Graph Convolutional Networks by Adding Fake Nodes
Xiaoyun Wang, Minhao Cheng, Joe Eaton, Cho-Jui Hsieh, S. F. Wu
AAML, GNN | 25 Oct 2018

Detecting egregious responses in neural sequence-to-sequence models
Tianxing He, James R. Glass
AAML | 11 Sep 2018

Adversarial Over-Sensitivity and Over-Stability Strategies for Dialogue Models
Tong Niu, Joey Tianyi Zhou
AAML | 06 Sep 2018

Adversarial Texts with Gradient Methods
Zhitao Gong, Wenlu Wang, Yangqiu Song, D. Song, Wei-Shinn Ku
AAML | 22 Jan 2018

Black-box Generation of Adversarial Text Sequences to Evade Deep Learning Classifiers
Ji Gao, Jack Lanchantin, M. Soffa, Yanjun Qi
AAML | 13 Jan 2018

Towards Crafting Text Adversarial Samples
Suranjana Samanta, S. Mehta
AAML | 10 Jul 2017