Is Robustness the Cost of Accuracy? -- A Comprehensive Study on the Robustness of 18 Deep Image Classification Models

5 August 2018
D. Su
Huan Zhang
Hongge Chen
Jinfeng Yi
Pin-Yu Chen
Yupeng Gao
    VLM
arXiv (abs) · PDF · HTML · GitHub (98★)

Papers citing "Is Robustness the Cost of Accuracy? -- A Comprehensive Study on the Robustness of 18 Deep Image Classification Models"

31 / 181 papers shown
Adversarial Explanations for Understanding Image Classification Decisions and Improved Neural Network Robustness
Nature Machine Intelligence (NMI), 2019
Walt Woods
Jack H Chen
C. Teuscher
AAML
238
49
0
07 Jun 2019
Architecture Selection via the Trade-off Between Accuracy and Robustness
Zhun Deng
Cynthia Dwork
Jialiang Wang
Yao-Min Zhao
AAML
238
5
0
04 Jun 2019
Purifying Adversarial Perturbation with Adversarially Trained Auto-encoders
Hebi Li
Qi Xiao
Shixin Tian
Jin Tian
AAML
151
4
0
26 May 2019
Enhancing Adversarial Defense by k-Winners-Take-All
International Conference on Learning Representations (ICLR), 2019
Chang Xiao
Peilin Zhong
Changxi Zheng
AAML
279
112
0
25 May 2019
Adversarially robust transfer learning
International Conference on Learning Representations (ICLR), 2019
Ali Shafahi
Parsa Saadatpanah
Chen Zhu
Amin Ghiasi
Christoph Studer
David Jacobs
Tom Goldstein
OOD
142
128
0
20 May 2019
What Do Adversarially Robust Models Look At?
Takahiro Itazuri
Yoshihiro Fukuhara
Hirokatsu Kataoka
Shigeo Morishima
104
5
0
19 May 2019
Exploring the Hyperparameter Landscape of Adversarial Robustness
Evelyn Duesterwald
Anupama Murthi
Ganesh Venkataraman
M. Sinn
Deepak Vijaykeerthy
AAML
111
7
0
09 May 2019
Batch Normalization is a Cause of Adversarial Vulnerability
A. Galloway
A. Golubeva
T. Tanay
M. Moussa
Graham W. Taylor
ODL AAML
239
84
0
06 May 2019
Defensive Quantization: When Efficiency Meets Robustness
Ji Lin
Chuang Gan
Song Han
MQ
257
211
0
17 Apr 2019
Interpreting Adversarial Examples with Attributes
Sadaf Gulshad
J. H. Metzen
A. Smeulders
Zeynep Akata
FAtt AAML
196
6
0
17 Apr 2019
Evaluating Robustness of Deep Image Super-Resolution against Adversarial Attacks
Jun-Ho Choi
Huan Zhang
Jun-Hyuk Kim
Cho-Jui Hsieh
Jong-Seok Lee
AAML SupR
179
80
0
12 Apr 2019
Adversarial Attacks against Deep Saliency Models
Zhaohui Che
Ali Borji
Guangtao Zhai
Suiyi Ling
G. Guo
P. Le Callet
AAML
109
6
0
02 Apr 2019
Bridging Adversarial Robustness and Gradient Interpretability
Beomsu Kim
Junghoon Seo
Taegyun Jeon
AAML
203
41
0
27 Mar 2019
Towards Understanding Adversarial Examples Systematically: Exploring Data Size, Task and Model Factors
Ke Sun
Zhanxing Zhu
Zhouchen Lin
AAML
150
19
0
28 Feb 2019
Adversarial Attack and Defense on Point Sets
Jiancheng Yang
Qiang Zhang
Rongyao Fang
Bingbing Ni
Jinxian Liu
Qi Tian
3DPC
209
145
0
28 Feb 2019
When Causal Intervention Meets Adversarial Examples and Image Masking for Deep Neural Networks
IEEE International Conference on Image Processing (ICIP), 2019
Chao-Han Huck Yang
Yi-Chieh Liu
Pin-Yu Chen
Xiaoli Ma
Y. Tsai
BDL AAML CML
188
21
0
09 Feb 2019
Theoretically Principled Trade-off between Robustness and Accuracy
Hongyang R. Zhang
Yaodong Yu
Jiantao Jiao
Eric Xing
L. Ghaoui
Sai Li
735
2,861
0
24 Jan 2019
The Limitations of Adversarial Training and the Blind-Spot Attack
Huan Zhang
Hongge Chen
Zhao Song
Duane S. Boning
Inderjit S. Dhillon
Cho-Jui Hsieh
AAML
202
154
0
15 Jan 2019
Face Hallucination Revisited: An Exploratory Study on Dataset Bias
Klemen Grm
Martin Pernuš
Leo Cluzel
Walter J. Scheirer
Simon Dobrišek
Vitomir Štruc
CVBM
240
11
0
21 Dec 2018
Disentangling Adversarial Robustness and Generalization
David Stutz
Matthias Hein
Bernt Schiele
AAML OOD
645
305
0
03 Dec 2018
Discrete Adversarial Attacks and Submodular Optimization with Applications to Text Classification
Qi Lei
Lingfei Wu
Pin-Yu Chen
A. Dimakis
Inderjit S. Dhillon
Michael Witbrock
AAML
231
94
0
01 Dec 2018
Task-generalizable Adversarial Attack based on Perceptual Metric
Muzammal Naseer
Salman H. Khan
Shafin Rahman
Fatih Porikli
AAML
131
53
0
22 Nov 2018
A Geometric Perspective on the Transferability of Adversarial Directions
International Conference on Artificial Intelligence and Statistics (AISTATS), 2018
Duncan C. McElfresh
H. Bidkhori
Dimitris Papailiopoulos
AAML
98
17
0
08 Nov 2018
Characterizing Audio Adversarial Examples Using Temporal Dependency
International Conference on Learning Representations (ICLR), 2018
Zhuolin Yang
Yue Liu
Pin-Yu Chen
Basel Alomair
AAML
210
173
0
28 Sep 2018
On The Utility of Conditional Generation Based Mutual Information for Characterizing Adversarial Subspaces
IEEE Global Conference on Signal and Information Processing (GlobalSIP), 2018
Chia-Yi Hsu
Pei-Hsuan Lu
Pin-Yu Chen
Chia-Mu Yu
AAML
192
1
0
24 Sep 2018
Is Ordered Weighted $\ell_1$ Regularized Regression Robust to Adversarial Perturbation? A Case Study on OSCAR
Pin-Yu Chen
B. Vinzamuri
Sijia Liu
AAML OOD
175
8
0
24 Sep 2018
Adversarial Examples: Opportunities and Challenges
Jiliang Zhang
Chen Li
AAML
241
270
0
13 Sep 2018
Structured Adversarial Attack: Towards General Implementation and Better Interpretability
Kaidi Xu
Sijia Liu
Pu Zhao
Pin-Yu Chen
Huan Zhang
Quanfu Fan
Deniz Erdogmus
Yanzhi Wang
Xinyu Lin
AAML
320
169
0
05 Aug 2018
Robustness May Be at Odds with Accuracy
Dimitris Tsipras
Shibani Santurkar
Logan Engstrom
Alexander Turner
Aleksander Madry
AAML
642
1,884
0
30 May 2018
AutoZOOM: Autoencoder-based Zeroth Order Optimization Method for Attacking Black-box Neural Networks
Chun-Chen Tu
Pai-Shun Ting
Pin-Yu Chen
Sijia Liu
Huan Zhang
Jinfeng Yi
Cho-Jui Hsieh
Shin-Ming Cheng
MLAU AAML
336
433
0
30 May 2018
GenAttack: Practical Black-box Attacks with Gradient-Free Optimization
M. Alzantot
Yash Sharma
Supriyo Chakraborty
Huan Zhang
Cho-Jui Hsieh
Mani B. Srivastava
AAML
322
280
0
28 May 2018
Page 4 of 4