Adversarial Examples Are Not Bugs, They Are Features

Neural Information Processing Systems (NeurIPS), 2019
6 May 2019
Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, Aleksander Madry
Tags: SILM

Papers citing "Adversarial Examples Are Not Bugs, They Are Features"

50 / 1,093 papers shown

An Adversarial Approach for Explaining the Predictions of Deep Neural Networks
Arash Rahnama, A.-Yu Tseng
Tags: FAtt, AAML, FaML
20 May 2020
Feature Purification: How Adversarial Training Performs Robust Deep Learning
Zeyuan Allen-Zhu, Yuanzhi Li
Tags: MLT, AAML
20 May 2020
Identifying Statistical Bias in Dataset Replication
Logan Engstrom, Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Jacob Steinhardt, Aleksander Madry
19 May 2020
Provable Robust Classification via Learned Smoothed Densities
Saeed Saremi, R. Srivastava
Tags: AAML
09 May 2020
Blind Backdoors in Deep Learning Models
Eugene Bagdasaryan, Vitaly Shmatikov
Tags: AAML, FedML, SILM
08 May 2020
Towards Frequency-Based Explanation for Robust CNN
Zifan Wang, Yilin Yang, Ankit Shrivastava, Varun Rawal, Zihao Ding
Tags: AAML, FAtt
06 May 2020
A neural network walks into a lab: towards using deep nets as models for human behavior
Wei-Ying Ma, B. Peters
Tags: HAI, AI4CE
02 May 2020
Does Data Augmentation Improve Generalization in NLP?
Rohan Jha, Charles Lovering, Ellie Pavlick
30 Apr 2020
"Call me sexist, but...": Revisiting Sexism Detection Using
  Psychological Scales and Adversarial Samples
"Call me sexist, but...": Revisiting Sexism Detection Using Psychological Scales and Adversarial SamplesInternational Conference on Web and Social Media (ICWSM), 2020
Mattia Samory
Indira Sen
Julian Kohne
Fabian Flöck
Claudia Wagner
269
92
0
27 Apr 2020
Towards Accurate and Robust Domain Adaptation under Noisy Environments
International Joint Conference on Artificial Intelligence (IJCAI), 2020
Zhongyi Han, Xian-Jin Gui, C. Cui, Yilong Yin
Tags: OOD
27 Apr 2020
Adversarial Attacks and Defenses: An Interpretation Perspective
Ninghao Liu, Mengnan Du, Ruocheng Guo, Huan Liu, Helen Zhou
Tags: AAML
23 Apr 2020
A Neural Scaling Law from the Dimension of the Data Manifold
Utkarsh Sharma, Jared Kaplan
22 Apr 2020
Provably robust deep generative models
Filipe Condessa, Zico Kolter
Tags: AAML, OOD
22 Apr 2020
Dynamic Knowledge Graph-based Dialogue Generation with Improved Adversarial Meta-Learning
Hongcai Xu, J. Bao, Gaojie Zhang
19 Apr 2020
Shortcut Learning in Deep Neural Networks
Nature Machine Intelligence (NMI), 2020
Robert Geirhos, J. Jacobsen, Claudio Michaelis, R. Zemel, Wieland Brendel, Matthias Bethge, Felix Wichmann
16 Apr 2020
Adversarial Robustness Guarantees for Random Deep Neural Networks
International Conference on Machine Learning (ICML), 2020
Giacomo De Palma, B. Kiani, S. Lloyd
Tags: AAML, OOD
13 Apr 2020
Luring of transferable adversarial perturbations in the black-box paradigm
Rémi Bernhard, Pierre-Alain Moëllic, J. Dutertre
Tags: AAML
10 Apr 2020
Transferable, Controllable, and Inconspicuous Adversarial Attacks on Person Re-identification With Deep Mis-Ranking
Computer Vision and Pattern Recognition (CVPR), 2020
Hongjun Wang, Guangrun Wang, Ya Li, Dongyu Zhang, Liang Lin
Tags: AAML
08 Apr 2020
Understanding (Non-)Robust Feature Disentanglement and the Relationship Between Low- and High-Dimensional Adversarial Attacks
Zuowen Wang, Leo Horne
Tags: AAML
04 Apr 2020
SOAR: Second-Order Adversarial Regularization
A. Ma, Fartash Faghri, Nicolas Papernot, Amir-massoud Farahmand
Tags: AAML
04 Apr 2020
Evading Deepfake-Image Detectors with White- and Black-Box Attacks
Nicholas Carlini, Hany Farid
Tags: AAML
01 Apr 2020
M2m: Imbalanced Classification via Major-to-minor Translation
Computer Vision and Pattern Recognition (CVPR), 2020
Jaehyung Kim, Jongheon Jeong, Jinwoo Shin
01 Apr 2020
Towards Deep Learning Models Resistant to Large Perturbations
Amirreza Shaeiri, Rozhin Nobahari, M. Rohban
Tags: OOD, AAML
30 Mar 2020
Can you hear me now? Sensitive comparisons of human and machine perception
Cognitive Sciences (CogSci), 2020
Michael A. Lepori, C. Firestone
Tags: AAML
27 Mar 2020
Going in circles is the way forward: the role of recurrence in visual inference
Current Opinion in Neurobiology (Curr Opin Neurobiol), 2020
R. S. van Bergen, N. Kriegeskorte
26 Mar 2020
Understanding the robustness of deep neural network classifiers for breast cancer screening
Witold Oleszkiewicz, Taro Makino, Stanislaw Jastrzebski, Tomasz Trzciński, Linda Moy, Dong Wang, Laura Heacock, Krzysztof J. Geras
23 Mar 2020
One Neuron to Fool Them All
Anshuman Suri, David Evans
Tags: AAML
20 Mar 2020
Adversarial Examples and the Deeper Riddle of Induction: The Need for a Theory of Artifacts in Deep Learning
Cameron Buckner
Tags: AAML
20 Mar 2020
Overinterpretation reveals image classification model pathologies
Neural Information Processing Systems (NeurIPS), 2020
Brandon Carter, Siddhartha Jain, Jonas W. Mueller, David K Gifford
Tags: FAtt
19 Mar 2020
Vulnerabilities of Connectionist AI Applications: Evaluation and Defence
Frontiers in Big Data (Front. Big Data), 2020
Christian Berghoff, Matthias Neu, Arndt von Twickel
Tags: AAML
18 Mar 2020
Adversarial Transferability in Wearable Sensor Systems
Ramesh Kumar Sah, H. Ghasemzadeh
Tags: AAML
17 Mar 2020
Heat and Blur: An Effective and Fast Defense Against Adversarial Examples
Haya Brama, Tal Grinshpoun
Tags: AAML
17 Mar 2020
On the benefits of defining vicinal distributions in latent space
Pattern Recognition Letters (Pattern Recognit. Lett.), 2020
Puneet Mangla, Vedant Singh, Shreyas Jayant Havaldar, V. Balasubramanian
Tags: AAML
14 Mar 2020
ARAE: Adversarially Robust Training of Autoencoders Improves Novelty Detection
Neural Networks (NN), 2020
Mohammadreza Salehi, Atrin Arya, Barbod Pajoum, Mohammad Otoofi, Amirreza Shaeiri, M. Rohban, Hamid R. Rabiee
Tags: AAML
12 Mar 2020
Adversarial Vertex Mixup: Toward Better Adversarially Robust Generalization
Computer Vision and Pattern Recognition (CVPR), 2020
Saehyung Lee, Hyungyu Lee, Sungroh Yoon
Tags: AAML
05 Mar 2020
Metrics and methods for robustness evaluation of neural networks with generative models
Machine Learning (ML), 2020
Igor Buzhinsky, Arseny Nerinovsky, S. Tripakis
Tags: AAML
04 Mar 2020
What's the relationship between CNNs and communication systems?
Hao Ge, X. Tu, Yanxiang Gong, M. Xie, Zheng Ma
03 Mar 2020
Out-of-Distribution Generalization via Risk Extrapolation (REx)
International Conference on Machine Learning (ICML), 2020
David M. Krueger, Ethan Caballero, J. Jacobsen, Amy Zhang, Jonathan Binas, Dinghuai Zhang, Rémi Le Priol, Aaron Courville
Tags: OOD
02 Mar 2020
Learning Adversarially Robust Representations via Worst-Case Mutual Information Maximization
International Conference on Machine Learning (ICML), 2020
Sicheng Zhu, Xiao Zhang, David Evans
Tags: SSL, OOD
26 Feb 2020
Revisiting Ensembles in an Adversarial Context: Improving Natural Accuracy
Aditya Saligrama, Guillaume Leclerc
Tags: AAML
26 Feb 2020
Randomization matters. How to defend against strong adversarial attacks
International Conference on Machine Learning (ICML), 2020
Rafael Pinot, Raphael Ettedgui, Geovani Rizk, Y. Chevaleyre, Jamal Atif
Tags: AAML
26 Feb 2020
Attacks Which Do Not Kill Training Make Adversarial Learning Stronger
International Conference on Machine Learning (ICML), 2020
Jingfeng Zhang, Xilie Xu, Bo Han, Gang Niu, Li-zhen Cui, Masashi Sugiyama, Mohan S. Kankanhalli
Tags: AAML
26 Feb 2020
The Curious Case of Adversarially Robust Models: More Data Can Help, Double Descend, or Hurt Generalization
Conference on Uncertainty in Artificial Intelligence (UAI), 2020
Yifei Min, Lin Chen, Amin Karbasi
Tags: AAML
25 Feb 2020
Gödel's Sentence Is An Adversarial Example But Unsolvable
Xiaodong Qi, Lansheng Han
Tags: AAML
25 Feb 2020
Towards Backdoor Attacks and Defense in Robust Machine Learning Models
Computers & Security (CS), 2020
E. Soremekun, Sakshi Udeshi, Sudipta Chattopadhyay
Tags: AAML
25 Feb 2020
UnMask: Adversarial Detection and Defense Through Robust Feature Alignment
Scott Freitas, Shang-Tse Chen, Zijie J. Wang, Duen Horng Chau
Tags: AAML
21 Feb 2020
Boosting Adversarial Training with Hypersphere Embedding
Neural Information Processing Systems (NeurIPS), 2020
Tianyu Pang, Xiao Yang, Yinpeng Dong, Kun Xu, Jun Zhu, Hang Su
Tags: AAML
20 Feb 2020
Hold me tight! Influence of discriminative features on deep network boundaries
Neural Information Processing Systems (NeurIPS), 2020
Guillermo Ortiz-Jiménez, Apostolos Modas, Seyed-Mohsen Moosavi-Dezfooli, P. Frossard
Tags: AAML
15 Feb 2020
Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks
Taro Kiritani, Koji Ono
Tags: AAML
13 Feb 2020
CEB Improves Model Robustness
Entropy (Entropy), 2020
Ian S. Fischer, Alexander A. Alemi
Tags: AAML
13 Feb 2020