Adversarial Examples Are Not Bugs, They Are Features

Neural Information Processing Systems (NeurIPS), 2019
6 May 2019
Andrew Ilyas
Shibani Santurkar
Dimitris Tsipras
Logan Engstrom
Brandon Tran
Aleksander Madry
SILM
ArXiv (abs) | PDF | HTML

Papers citing "Adversarial Examples Are Not Bugs, They Are Features"

50 / 1,093 papers shown
Adversarial Robust Training of Deep Learning MRI Reconstruction Models
Machine Learning for Biomedical Imaging (MLBI), 2020
Francesco Calivá
Kaiyang Cheng
Rutwik Shah
V. Pedoia
OOD AAML MedIm
30 Oct 2020
Understanding the Failure Modes of Out-of-Distribution Generalization
International Conference on Learning Representations (ICLR), 2020
Vaishnavh Nagarajan
Anders Andreassen
Behnam Neyshabur
OOD OODD
29 Oct 2020
Transferable Universal Adversarial Perturbations Using Generative Models
Atiyeh Hashemi
Andreas Bär
S. Mozaffari
Tim Fingscheidt
AAML
28 Oct 2020
Robustness May Be at Odds with Fairness: An Empirical Study on Class-wise Accuracy
Philipp Benz
Chaoning Zhang
Adil Karjauv
In So Kweon
AAML
26 Oct 2020
Exemplary Natural Images Explain CNN Activations Better than State-of-the-Art Feature Visualization
Judy Borowski
Roland S. Zimmermann
Judith Schepers
Robert Geirhos
Thomas S. A. Wallis
Matthias Bethge
Wieland Brendel
FAtt
23 Oct 2020
Adversarial Robustness of Supervised Sparse Coding
Jeremias Sulam
Ramchandran Muthumukar
R. Arora
AAML
22 Oct 2020
Contrastive Learning with Adversarial Examples
Chih-Hui Ho
Nuno Vasconcelos
SSL
22 Oct 2020
Boosting Gradient for White-Box Adversarial Attacks
Hongying Liu
Zhenyu Zhou
Fanhua Shang
Xiaoyu Qi
Yuanyuan Liu
L. Jiao
AAML
21 Oct 2020
VenoMave: Targeted Poisoning Against Speech Recognition
H. Aghakhani
Lea Schönherr
Thorsten Eisenhofer
D. Kolossa
Thorsten Holz
Christopher Kruegel
Giovanni Vigna
AAML
21 Oct 2020
Towards Understanding the Dynamics of the First-Order Adversaries
International Conference on Machine Learning (ICML), 2020
Zhun Deng
Hangfeng He
Jiaoyang Huang
Weijie J. Su
AAML
20 Oct 2020
Data-driven Identification of 2D Partial Differential Equations using extracted physical features
Computer Methods in Applied Mechanics and Engineering (CMAME), 2020
Kazem Meidani
A. Farimani
20 Oct 2020
Verifying the Causes of Adversarial Examples
International Conference on Pattern Recognition (ICPR), 2020
Honglin Li
Yifei Fan
F. Ganz
A. Yezzi
Payam Barnaghi
AAML
19 Oct 2020
Optimism in the Face of Adversity: Understanding and Improving Deep Learning through Adversarial Robustness
Proceedings of the IEEE (Proc. IEEE), 2020
Guillermo Ortiz-Jiménez
Apostolos Modas
Seyed-Mohsen Moosavi-Dezfooli
P. Frossard
AAML
19 Oct 2020
Poisoned classifiers are not only backdoored, they are fundamentally broken
Mingjie Sun
Siddhant Agarwal
J. Zico Kolter
18 Oct 2020
GreedyFool: Multi-Factor Imperceptibility and Its Application to Designing a Black-box Adversarial Attack
Hui Liu
Bo Zhao
Minzhi Ji
Peng Liu
AAML
14 Oct 2020
To be Robust or to be Fair: Towards Fairness in Adversarial Training
Han Xu
Xiaorui Liu
Yaxin Li
Anil K. Jain
Shucheng Zhou
13 Oct 2020
Open-sourced Dataset Protection via Backdoor Watermarking
Yiming Li
Zi-Mou Zhang
Jiawang Bai
Baoyuan Wu
Yong Jiang
Shutao Xia
12 Oct 2020
The Risks of Invariant Risk Minimization
Elan Rosenfeld
Pradeep Ravikumar
Andrej Risteski
OOD
12 Oct 2020
Diagnosing and Preventing Instabilities in Recurrent Video Processing
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2020
T. Tanay
Aivar Sootla
Matteo Maggioni
P. Dokania
Juil Sock
A. Leonardis
Greg Slabaugh
10 Oct 2020
Understanding Local Robustness of Deep Neural Networks under Natural Variations
Ziyuan Zhong
Yuchi Tian
Baishakhi Ray
AAML
09 Oct 2020
A Unified Approach to Interpreting and Boosting Adversarial Transferability
Xin Eric Wang
Jie Ren
Shuyu Lin
Xiangming Zhu
Yisen Wang
Quanshi Zhang
AAML
08 Oct 2020
Improve Adversarial Robustness via Weight Penalization on Classification Layer
Cong Xu
Dan Li
Min Yang
AAML
08 Oct 2020
Batch Normalization Increases Adversarial Vulnerability and Decreases Adversarial Transferability: A Non-Robust Feature Perspective
Philipp Benz
Chaoning Zhang
In So Kweon
AAML
07 Oct 2020
Adversarial attacks on audio source separation
Naoya Takahashi
S. Inoue
Yuki Mitsufuji
AAML
07 Oct 2020
Do Wider Neural Networks Really Help Adversarial Robustness?
Neural Information Processing Systems (NeurIPS), 2020
Boxi Wu
Jinghui Chen
Deng Cai
Xiaofei He
Quanquan Gu
AAML
03 Oct 2020
DVERGE: Diversifying Vulnerabilities for Enhanced Robust Generation of Ensembles
Huanrui Yang
Jingyang Zhang
Hongliang Dong
Nathan Inkawhich
Andrew B. Gardner
Andrew Touchet
Wesley Wilkes
Heath Berry
Xue Yang
AAML
30 Sep 2020
Generating End-to-End Adversarial Examples for Malware Classifiers Using Explainability
Ishai Rosenberg
Shai Meir
J. Berrebi
I. Gordon
Guillaume Sicard
Eli David
AAML SILM
28 Sep 2020
Beneficial Perturbations Network for Defending Adversarial Examples
Shixian Wen
A. Rios
Laurent Itti
AAML
27 Sep 2020
Beneficial Perturbation Network for designing general adaptive artificial intelligence systems
IEEE Transactions on Neural Networks and Learning Systems (IEEE TNNLS), 2020
Shixian Wen
A. Rios
Yunhao Ge
Laurent Itti
OOD AAML
27 Sep 2020
A Unifying Review of Deep and Shallow Anomaly Detection
Proceedings of the IEEE (Proc. IEEE), 2020
Lukas Ruff
Jacob R. Kauffmann
Robert A. Vandermeulen
G. Montavon
Wojciech Samek
Matthias Kirchler
Thomas G. Dietterich
Klaus-Robert Müller
UQCV
24 Sep 2020
Tailoring: encoding inductive biases by optimizing unsupervised objectives at prediction time
Neural Information Processing Systems (NeurIPS), 2020
Ferran Alet
Maria Bauza
Kenji Kawaguchi
Nurullah Giray Kuru
Tomas Lozano-Perez
L. Kaelbling
AI4CE
22 Sep 2020
Stereopagnosia: Fooling Stereo Networks with Adversarial Perturbations
AAAI Conference on Artificial Intelligence (AAAI), 2020
A. Wong
Mukund Mundhra
Stefano Soatto
AAML
21 Sep 2020
Adversarial Training with Stochastic Weight Average
IEEE International Conference on Image Processing (ICIP), 2020
Joong-won Hwang
Youngwan Lee
Sungchan Oh
Yuseok Bae
OOD AAML
21 Sep 2020
The Intriguing Relation Between Counterfactual Explanations and Adversarial Examples
Minds and Machines (MM), 2020
Timo Freiesleben
GAN
11 Sep 2020
Understanding the Role of Individual Units in a Deep Neural Network
Proceedings of the National Academy of Sciences of the United States of America (PNAS), 2020
David Bau
Jun-Yan Zhu
Hendrik Strobelt
Àgata Lapedriza
Bolei Zhou
Antonio Torralba
GAN
10 Sep 2020
Second Order Optimization for Adversarial Robustness and Interpretability
Theodoros Tsiligkaridis
Jay Roberts
AAML
10 Sep 2020
Quantifying the Preferential Direction of the Model Gradient in Adversarial Training With Projected Gradient Descent
Pattern Recognition (Pattern Recognit.), 2020
Ricardo Bigolin Lanfredi
Joyce D. Schroeder
Tolga Tasdizen
10 Sep 2020
Adversarial Machine Learning in Image Classification: A Survey Towards the Defender's Perspective
ACM Computing Surveys (ACM CSUR), 2020
G. R. Machado
Eugênio Silva
R. Goldschmidt
AAML
08 Sep 2020
Bluff: Interactively Deciphering Adversarial Attacks on Deep Neural Networks
Visual .. (VISUAL), 2020
Nilaksh Das
Haekyu Park
Zijie J. Wang
Fred Hohman
Robert Firstman
Emily Rogers
Duen Horng Chau
AAML
05 Sep 2020
Dual Manifold Adversarial Robustness: Defense against Lp and non-Lp Adversarial Attacks
Neural Information Processing Systems (NeurIPS), 2020
Wei-An Lin
Chun Pong Lau
Alexander Levine
Ramalingam Chellappa
Soheil Feizi
AAML
05 Sep 2020
A Wholistic View of Continual Learning with Deep Neural Networks: Forgotten Lessons and the Bridge to Active and Open World Learning
Neural Networks (NN), 2020
Martin Mundt
Yongjun Hong
Iuliia Pliushch
Visvanathan Ramesh
CLL
03 Sep 2020
Puzzle-AE: Novelty Detection in Images through Solving Puzzles
Mohammadreza Salehi
Ainaz Eftekhar
Niousha Sadjadi
M. Rohban
Hamid R. Rabiee
AAML
29 Aug 2020
Minimal Adversarial Examples for Deep Learning on 3D Point Clouds
IEEE International Conference on Computer Vision (ICCV), 2020
Jaeyeon Kim
Binh-Son Hua
D. Nguyen
Sai-Kit Yeung
3DPC
27 Aug 2020
Likelihood Landscapes: A Unifying Principle Behind Many Adversarial Defenses
Fu-Huei Lin
Rohit Mittapalli
Prithvijit Chattopadhyay
Daniel Bolya
Judy Hoffman
AAML
25 Aug 2020
A Survey on Assessing the Generalization Envelope of Deep Neural Networks: Predictive Uncertainty, Out-of-distribution and Adversarial Samples
Julia Lust
Alexandru Paul Condurache
UQCV AAML AI4CE
21 Aug 2020
Addressing Neural Network Robustness with Mixup and Targeted Labeling Adversarial Training
Alfred Laugros
A. Caplier
Matthieu Ospici
AAML
19 Aug 2020
Adversarial Concurrent Training: Optimizing Robustness and Accuracy Trade-off of Deep Neural Networks
Elahe Arani
F. Sarfraz
Bahram Zonooz
AAML
16 Aug 2020
Optimizing Information Loss Towards Robust Neural Networks
Philip Sperl
Konstantin Böttinger
AAML
07 Aug 2020
Adversarial Examples on Object Recognition: A Comprehensive Survey
ACM Computing Surveys (ACM CSUR), 2020
A. Serban
E. Poll
Joost Visser
AAML
07 Aug 2020
Assessing the (Un)Trustworthiness of Saliency Maps for Localizing Abnormalities in Medical Imaging
medRxiv (medRxiv), 2020
N. Arun
N. Gaw
P. Singh
Ken Chang
M. Aggarwal
...
J. Patel
M. Gidwani
Julius Adebayo
M. D. Li
Jayashree Kalpathy-Cramer
FAtt
06 Aug 2020
Page 18 of 22