Adversarial Examples Are Not Bugs, They Are Features (arXiv:1905.02175)
6 May 2019
Andrew Ilyas
Shibani Santurkar
Dimitris Tsipras
Logan Engstrom
Brandon Tran
A. Madry
SILM
Papers citing "Adversarial Examples Are Not Bugs, They Are Features"
50 / 297 papers shown
Saliency Guided Adversarial Training for Learning Generalizable Features with Applications to Medical Imaging Classification System
Xin Li
Yao Qiang
Chengyin Li
Sijia Liu
D. Zhu
OOD
MedIm
29
4
0
09 Sep 2022
Unrestricted Adversarial Samples Based on Non-semantic Feature Clusters Substitution
Ming-Kuai Zhou
Xiaobing Pei
AAML
16
0
0
31 Aug 2022
Shortcut Learning of Large Language Models in Natural Language Understanding
Mengnan Du
Fengxiang He
Na Zou
Dacheng Tao
Xia Hu
KELM
OffRL
28
84
0
25 Aug 2022
Improving Adversarial Robustness via Mutual Information Estimation
Dawei Zhou
Nannan Wang
Xinbo Gao
Bo Han
Xiaoyu Wang
Yibing Zhan
Tongliang Liu
AAML
16
15
0
25 Jul 2022
Can we achieve robustness from data alone?
Nikolaos Tsilivis
Jingtong Su
Julia Kempe
OOD
DD
36
18
0
24 Jul 2022
Decorrelative Network Architecture for Robust Electrocardiogram Classification
Christopher Wiedeman
Ge Wang
OOD
13
2
0
19 Jul 2022
On the Robustness and Anomaly Detection of Sparse Neural Networks
Morgane Ayle
Bertrand Charpentier
John Rachwan
Daniel Zügner
Simon Geisler
Stephan Günnemann
AAML
52
3
0
09 Jul 2022
How many perturbations break this model? Evaluating robustness beyond adversarial accuracy
R. Olivier
Bhiksha Raj
AAML
29
5
0
08 Jul 2022
Removing Batch Normalization Boosts Adversarial Training
Haotao Wang
Aston Zhang
Shuai Zheng
Xingjian Shi
Mu Li
Zhangyang Wang
32
41
0
04 Jul 2022
Auditing Visualizations: Transparency Methods Struggle to Detect Anomalous Behavior
Jean-Stanislas Denain
Jacob Steinhardt
AAML
33
7
0
27 Jun 2022
Robustness Evaluation of Deep Unsupervised Learning Algorithms for Intrusion Detection Systems
D'Jeff K. Nkashama
Ariana Soltani
Jean-Charles Verdier
Marc Frappier
Pierre-Martin Tardif
F. Kabanza
OOD
AAML
21
5
0
25 Jun 2022
On the Role of Generalization in Transferability of Adversarial Examples
Yilin Wang
Farzan Farnia
AAML
24
10
0
18 Jun 2022
A Meta-Analysis of Distributionally-Robust Models
Ben Feuer
Ameya Joshi
C. Hegde
OOD
VLM
35
3
0
15 Jun 2022
SeATrans: Learning Segmentation-Assisted diagnosis model via Transformer
Junde Wu
Huihui Fang
Fangxin Shang
Dalu Yang
Zhao-Yang Wang
Jing Gao
Yehui Yang
Yanwu Xu
MedIm
ViT
17
19
0
12 Jun 2022
Meet You Halfway: Explaining Deep Learning Mysteries
Oriel BenShmuel
AAML
FedML
FAtt
OOD
22
0
0
09 Jun 2022
Adversarial Reprogramming Revisited
Matthias Englert
R. Lazic
AAML
21
8
0
07 Jun 2022
Building Robust Ensembles via Margin Boosting
Dinghuai Zhang
Hongyang R. Zhang
Aaron Courville
Yoshua Bengio
Pradeep Ravikumar
A. Suggala
AAML
UQCV
40
15
0
07 Jun 2022
Vanilla Feature Distillation for Improving the Accuracy-Robustness Trade-Off in Adversarial Training
Guodong Cao
Zhibo Wang
Xiaowei Dong
Zhifei Zhang
Hengchang Guo
Zhan Qin
Kui Ren
AAML
24
1
0
05 Jun 2022
Optimizing Relevance Maps of Vision Transformers Improves Robustness
Hila Chefer
Idan Schwartz
Lior Wolf
ViT
32
37
0
02 Jun 2022
Exact Feature Collisions in Neural Networks
Utku Ozbulak
Manvel Gasparyan
Shodhan Rao
W. D. Neve
Arnout Van Messem
AAML
19
1
0
31 May 2022
One-Pixel Shortcut: on the Learning Preference of Deep Neural Networks
Shutong Wu
Sizhe Chen
Cihang Xie
X. Huang
AAML
42
27
0
24 May 2022
Sample Complexity Bounds for Robustly Learning Decision Lists against Evasion Attacks
Pascale Gourdeau
Varun Kanade
Marta Z. Kwiatkowska
J. Worrell
AAML
13
5
0
12 May 2022
How Does Frequency Bias Affect the Robustness of Neural Image Classifiers against Common Corruption and Adversarial Perturbations?
Alvin Chan
Yew-Soon Ong
Clement Tan
AAML
22
13
0
09 May 2022
Adversarial Training for High-Stakes Reliability
Daniel M. Ziegler
Seraphina Nix
Lawrence Chan
Tim Bauman
Peter Schmidt-Nielsen
...
Noa Nabeshima
Benjamin Weinstein-Raun
D. Haas
Buck Shlegeris
Nate Thomas
AAML
30
59
0
03 May 2022
SSR-GNNs: Stroke-based Sketch Representation with Graph Neural Networks
Sheng Cheng
Yi Ren
Yezhou Yang
56
2
0
27 Apr 2022
On Fragile Features and Batch Normalization in Adversarial Training
Nils Philipp Walter
David Stutz
Bernt Schiele
AAML
19
5
0
26 Apr 2022
Synthesizing Adversarial Visual Scenarios for Model-Based Robotic Control
Shubhankar Agarwal
Sandeep P. Chinchali
AAML
27
4
0
13 Apr 2022
Self-Supervised Losses for One-Class Textual Anomaly Detection
Kimberly T. Mai
Toby O. Davies
Lewis D. Griffin
12
7
0
12 Apr 2022
Anti-Adversarially Manipulated Attributions for Weakly Supervised Semantic Segmentation and Object Localization
Jungbeom Lee
Eunji Kim
J. Mok
Sung-Hoon Yoon
WSOL
40
29
0
11 Apr 2022
Investigating Top-k White-Box and Transferable Black-box Attack
Chaoning Zhang
Philipp Benz
Adil Karjauv
Jae-Won Cho
Kang Zhang
In So Kweon
31
42
0
30 Mar 2022
Invariance Learning based on Label Hierarchy
S. Toyota
Kenji Fukumizu
OOD
19
1
0
29 Mar 2022
A Survey of Robust Adversarial Training in Pattern Recognition: Fundamental, Theory, and Methodologies
Zhuang Qian
Kaizhu Huang
Qiufeng Wang
Xu-Yao Zhang
OOD
AAML
ObjD
49
71
0
26 Mar 2022
The Dark Side: Security Concerns in Machine Learning for EDA
Zhiyao Xie
Jingyu Pan
Chen-Chia Chang
Yiran Chen
6
4
0
20 Mar 2022
Attacking deep networks with surrogate-based adversarial black-box methods is easy
Nicholas A. Lord
Romain Mueller
Luca Bertinetto
AAML
MLAU
19
24
0
16 Mar 2022
LAS-AT: Adversarial Training with Learnable Attack Strategy
Xiaojun Jia
Yong Zhang
Baoyuan Wu
Ke Ma
Jue Wang
Xiaochun Cao
AAML
47
131
0
13 Mar 2022
AutoGPart: Intermediate Supervision Search for Generalizable 3D Part Segmentation
Xueyi Liu
Xiaomeng Xu
Anyi Rao
Chuang Gan
L. Yi
3DPC
24
14
0
13 Mar 2022
Why adversarial training can hurt robust accuracy
Jacob Clarysse
Julia Hörrmann
Fanny Yang
AAML
13
18
0
03 Mar 2022
Limitations of Deep Learning for Inverse Problems on Digital Hardware
Holger Boche
Adalbert Fono
Gitta Kutyniok
24
25
0
28 Feb 2022
LPF-Defense: 3D Adversarial Defense based on Frequency Analysis
Hanieh Naderi
Kimia Noorbakhsh
Arian Etemadi
S. Kasaei
AAML
14
12
0
23 Feb 2022
Robustness and Accuracy Could Be Reconcilable by (Proper) Definition
Tianyu Pang
Min-Bin Lin
Xiao Yang
Junyi Zhu
Shuicheng Yan
19
119
0
21 Feb 2022
D4: Detection of Adversarial Diffusion Deepfakes Using Disjoint Ensembles
Ashish Hooda
Neal Mangaokar
Ryan Feng
Kassem Fawaz
S. Jha
Atul Prakash
24
11
0
11 Feb 2022
Adversarially Robust Models may not Transfer Better: Sufficient Conditions for Domain Transferability from the View of Regularization
Xiaojun Xu
Jacky Y. Zhang
Evelyn Ma
Danny Son
Oluwasanmi Koyejo
Bo-wen Li
20
10
0
03 Feb 2022
Finding Biological Plausibility for Adversarially Robust Features via Metameric Tasks
A. Harrington
Arturo Deza
OOD
AAML
24
20
0
02 Feb 2022
Datamodels: Predicting Predictions from Training Data
Andrew Ilyas
Sung Min Park
Logan Engstrom
Guillaume Leclerc
A. Madry
TDI
38
130
0
01 Feb 2022
Closer Look at the Transferability of Adversarial Examples: How They Fool Different Models Differently
Futa Waseda
Sosuke Nishikawa
Trung-Nghia Le
H. Nguyen
Isao Echizen
SILM
28
35
0
29 Dec 2021
DeepAdversaries: Examining the Robustness of Deep Learning Models for Galaxy Morphology Classification
A. Ćiprijanović
Diana Kafkes
Gregory F. Snyder
F. Sánchez
G. Perdue
K. Pedro
Brian D. Nord
Sandeep Madireddy
Stefan M. Wild
AAML
34
15
0
28 Dec 2021
How Should Pre-Trained Language Models Be Fine-Tuned Towards Adversarial Robustness?
Xinshuai Dong
Anh Tuan Luu
Min-Bin Lin
Shuicheng Yan
Hanwang Zhang
SILM
AAML
20
55
0
22 Dec 2021
Measure and Improve Robustness in NLP Models: A Survey
Xuezhi Wang
Haohan Wang
Diyi Yang
139
130
0
15 Dec 2021
On the Impact of Hard Adversarial Instances on Overfitting in Adversarial Training
Chen Liu
Zhichao Huang
Mathieu Salzmann
Tong Zhang
Sabine Süsstrunk
AAML
15
13
0
14 Dec 2021
Image classifiers can not be made robust to small perturbations
Zheng Dai
David K Gifford
VLM
AAML
16
1
0
07 Dec 2021