Benchmarking Neural Network Robustness to Common Corruptions and Surface Variations
Dan Hendrycks, Thomas G. Dietterich
arXiv:1807.01697 · 4 July 2018 · OOD

Papers citing "Benchmarking Neural Network Robustness to Common Corruptions and Surface Variations"

36 / 36 papers shown
Deep Anomaly Detection with Outlier Exposure
Dan Hendrycks, Mantas Mazeika, Thomas G. Dietterich
OODD · 114 · 1,468 · 0 · 11 Dec 2018

Open Category Detection with PAC Guarantees
Si Liu, Risheek Garrepalli, Thomas G. Dietterich, Alan Fern, Dan Hendrycks
35 · 84 · 0 · 01 Aug 2018

Do CIFAR-10 Classifiers Generalize to CIFAR-10?
Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, Vaishaal Shankar
OOD, FedML, ELM · 113 · 409 · 0 · 01 Jun 2018

Why do deep convolutional networks generalize so poorly to small image transformations?
Aharon Azulay, Yair Weiss
57 · 559 · 0 · 30 May 2018

Using Trusted Data to Train Deep Networks on Labels Corrupted by Severe Noise
Dan Hendrycks, Mantas Mazeika, Duncan Wilson, Kevin Gimpel
NoLa · 122 · 553 · 0 · 14 Feb 2018

CondenseNet: An Efficient DenseNet using Learned Group Convolutions
Gao Huang, Shichen Liu, Laurens van der Maaten, Kilian Q. Weinberger
76 · 796 · 0 · 25 Nov 2017

Standard detectors aren't (currently) fooled by physical adversarial stop signs
Jiajun Lu, Hussein Sibai, Evan Fabry, David A. Forsyth
AAML · 37 · 59 · 0 · 09 Oct 2017

Provably Minimally-Distorted Adversarial Examples
Nicholas Carlini, Guy Katz, Clark W. Barrett, D. Dill
AAML · 56 · 89 · 0 · 29 Sep 2017

Towards Proving the Adversarial Robustness of Deep Neural Networks
Guy Katz, Clark W. Barrett, D. Dill, Kyle D. Julian, Mykel J. Kochenderfer
AAML, OOD · 39 · 118 · 0 · 08 Sep 2017

Robust Physical-World Attacks on Deep Learning Models
Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Yue Liu, Amir Rahmati, Chaowei Xiao, Atul Prakash, Tadayoshi Kohno, D. Song
AAML · 45 · 594 · 0 · 27 Jul 2017

Foolbox: A Python toolbox to benchmark the robustness of machine learning models
Jonas Rauber, Wieland Brendel, Matthias Bethge
AAML · 52 · 283 · 0 · 13 Jul 2017

Comparing deep neural networks against humans: object recognition when the signal gets weaker
Robert Geirhos, David H. J. Janssen, Heiko H. Schutt, Jonas Rauber, Matthias Bethge, Felix Wichmann
62 · 244 · 0 · 21 Jun 2017

Towards Deep Learning Models Resistant to Adversarial Attacks
Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu
SILM, OOD · 231 · 11,962 · 0 · 19 Jun 2017

Certified Defenses for Data Poisoning Attacks
Jacob Steinhardt, Pang Wei Koh, Percy Liang
AAML · 73 · 751 · 0 · 09 Jun 2017

Shake-Shake regularization
Xavier Gastaldi
3DPC, BDL, OOD · 60 · 380 · 0 · 21 May 2017

Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods
Nicholas Carlini, D. Wagner
AAML · 103 · 1,851 · 0 · 20 May 2017

A Study and Comparison of Human and Deep Learning Recognition Performance Under Visual Distortions
Samuel F. Dodge, Lina Karam
3DH · 48 · 421 · 0 · 06 May 2017

Google's Cloud Vision API Is Not Robust To Noise
Hossein Hosseini, Baicen Xiao, Radha Poovendran
AAML · 52 · 123 · 0 · 16 Apr 2017

Quality Resilient Deep Neural Networks
Samuel F. Dodge, Lina Karam
OOD · 30 · 46 · 0 · 23 Mar 2017

Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization
Xun Huang, Serge J. Belongie
OOD · 152 · 4,331 · 0 · 20 Mar 2017

On Detecting Adversarial Perturbations
J. H. Metzen, Tim Genewein, Volker Fischer, Bastian Bischoff
AAML · 49 · 947 · 0 · 14 Feb 2017

Examining the Impact of Blur on Recognition by Convolutional Networks
Igor Vasiljevic, Ayan Chakrabarti, Gregory Shakhnarovich
51 · 197 · 0 · 17 Nov 2016

Aggregated Residual Transformations for Deep Neural Networks
Saining Xie, Ross B. Girshick, Piotr Dollár, Zhuowen Tu, Kaiming He
435 · 10,281 · 0 · 16 Nov 2016

Adversarial Machine Learning at Scale
Alexey Kurakin, Ian Goodfellow, Samy Bengio
AAML · 450 · 3,124 · 0 · 04 Nov 2016

A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks
Dan Hendrycks, Kevin Gimpel
UQCV · 103 · 3,420 · 0 · 07 Oct 2016

Densely Connected Convolutional Networks
Gao Huang, Zhuang Liu, Laurens van der Maaten, Kilian Q. Weinberger
PINN, 3DV · 631 · 36,599 · 0 · 25 Aug 2016

Towards Evaluating the Robustness of Neural Networks
Nicholas Carlini, D. Wagner
OOD, AAML · 170 · 8,513 · 0 · 16 Aug 2016

SGDR: Stochastic Gradient Descent with Warm Restarts
I. Loshchilov, Frank Hutter
ODL · 231 · 8,030 · 0 · 13 Aug 2016

Early Methods for Detecting Adversarial Images
Dan Hendrycks, Kevin Gimpel
AAML · 56 · 236 · 0 · 01 Aug 2016

Defensive Distillation is Not Robust to Adversarial Examples
Nicholas Carlini, D. Wagner
35 · 339 · 0 · 14 Jul 2016

Measuring Neural Net Robustness with Constraints
Osbert Bastani, Yani Andrew Ioannou, Leonidas Lampropoulos, Dimitrios Vytiniotis, A. Nori, A. Criminisi
AAML · 50 · 423 · 0 · 24 May 2016

Improving the Robustness of Deep Neural Networks via Stability Training
Stephan Zheng, Yang Song, Thomas Leung, Ian Goodfellow
OOD · 30 · 637 · 0 · 15 Apr 2016

Deep Residual Learning for Image Recognition
Kaiming He, Xinming Zhang, Shaoqing Ren, Jian Sun
MedIm · 1.4K · 192,638 · 0 · 10 Dec 2015

Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks
Nicolas Papernot, Patrick McDaniel, Xi Wu, S. Jha, A. Swami
AAML · 48 · 3,061 · 0 · 14 Nov 2015

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
Sergey Ioffe, Christian Szegedy
OOD · 328 · 43,154 · 0 · 11 Feb 2015

Intriguing properties of neural networks
Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, D. Erhan, Ian Goodfellow, Rob Fergus
AAML · 185 · 14,831 · 1 · 21 Dec 2013