Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images

Computer Vision and Pattern Recognition (CVPR), 2014
5 December 2014
Anh Totti Nguyen
J. Yosinski
Jeff Clune
    AAML

Papers citing "Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images"

50 / 1,455 papers shown
Controversial stimuli: pitting neural networks against each other as models of human recognition
Tal Golan
Prashant C. Raju
N. Kriegeskorte
AAML
225
39
0
21 Nov 2019
The Origins and Prevalence of Texture Bias in Convolutional Neural Networks
Katherine L. Hermann
Ting Chen
Simon Kornblith
CVBM
369
21
0
20 Nov 2019
Robust Deep Neural Networks Inspired by Fuzzy Logic
Minh Le
OOD AAML AI4CE
366
0
0
20 Nov 2019
Coverage Testing of Deep Learning Models using Dataset Characterization
Senthil Mani
A. Sankaran
Srikanth G. Tamilselvam
Akshay Sethi
AAML
80
21
0
17 Nov 2019
CAGFuzz: Coverage-Guided Adversarial Generative Fuzzing Testing of Deep Learning Systems
Pengcheng Zhang
Qiyin Dai
Patrizio Pelliccione
AAML
184
4
0
14 Nov 2019
Adversarial Margin Maximization Networks
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2019
Ziang Yan
Yiwen Guo
Changshui Zhang
AAML
89
13
0
14 Nov 2019
What Do Compressed Deep Neural Networks Forget?
Sara Hooker
Aaron Courville
Gregory Clark
Yann N. Dauphin
Andrea Frome
311
203
0
13 Nov 2019
Adversarial Music: Real World Audio Adversary Against Wake-word Detection System
Neural Information Processing Systems (NeurIPS), 2019
Juncheng Billy Li
Shuhui Qu
Xinjian Li
Joseph Szurley
J. Zico Kolter
Florian Metze
AAML
326
71
0
31 Oct 2019
Are Out-of-Distribution Detection Methods Effective on Large-Scale Datasets?
Ryne Roady
Tyler L. Hayes
Ronald Kemker
Ayesha Gonzales
Christopher Kanan
OODD
143
20
0
30 Oct 2019
Towards calibrated and scalable uncertainty representations for neural networks
Nabeel Seedat
Christopher Kanan
UQCV
246
20
0
28 Oct 2019
Neurlux: Dynamic Malware Analysis Without Feature Engineering
Asia-Pacific Computer Systems Architecture Conference (APCSAC), 2019
Chani Jindal
Christopher Salls
H. Aghakhani
Keith Long
Christopher Kruegel
Giovanni Vigna
244
70
0
24 Oct 2019
Detecting Out-of-Distribution Inputs in Deep Neural Networks Using an Early-Layer Output
Vahdat Abdelzad
Krzysztof Czarnecki
Rick Salay
Taylor Denouden
Sachin Vernekar
Buu Phan
OODD
213
48
0
23 Oct 2019
Attacking Optical Flow
IEEE International Conference on Computer Vision (ICCV), 2019
Anurag Ranjan
J. Janai
Andreas Geiger
Michael J. Black
AAML 3DPC
181
91
0
22 Oct 2019
Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI
Information Fusion (Inf. Fusion), 2019
Alejandro Barredo Arrieta
Natalia Díaz Rodríguez
Javier Del Ser
Adrien Bennetot
Siham Tabik
...
S. Gil-Lopez
Daniel Molina
Richard Benjamins
Raja Chatila
Francisco Herrera
XAI
937
7,578
0
22 Oct 2019
Unsupervised Out-of-Distribution Detection with Batch Normalization
Jiaming Song
Yang Song
Stefano Ermon
OODD
114
23
0
21 Oct 2019
Are Perceptually-Aligned Gradients a General Property of Robust Classifiers?
Simran Kaur
Jeremy M. Cohen
Zachary Chase Lipton
OOD AAML
225
68
0
18 Oct 2019
KerCNNs: biologically inspired lateral connections for classification of corrupted images
Noemi Montobbio
L. Bonnasse-Gahot
G. Citti
A. Sarti
123
10
0
18 Oct 2019
Adversarial Examples for Models of Code
Noam Yefet
Uri Alon
Eran Yahav
SILM AAML MLAU
380
186
0
15 Oct 2019
Hear "No Evil", See "Kenansville": Efficient and Transferable Black-Box
  Attacks on Speech Recognition and Voice Identification Systems
Hear "No Evil", See "Kenansville": Efficient and Transferable Black-Box Attacks on Speech Recognition and Voice Identification Systems
H. Abdullah
Muhammad Sajidur Rahman
Washington Garcia
Logan Blue
Kevin Warren
Anurag Swarnim Yadav
T. Shrimpton
Patrick Traynor
AAML
142
96
0
11 Oct 2019
Out-of-distribution Detection in Classifiers via Generation
Sachin Vernekar
Ashish Gaurav
Vahdat Abdelzad
Taylor Denouden
Rick Salay
Krzysztof Czarnecki
OODD
269
85
0
09 Oct 2019
Continual Learning in Neural Networks
Rahaf Aljundi
CLL
200
42
0
07 Oct 2019
Testing and verification of neural-network-based safety-critical control software: A systematic literature review
Information and Software Technology (IST), 2019
Jin Zhang
Jingyue Li
224
57
0
05 Oct 2019
Requirements for Developing Robust Neural Networks
Rulin Shao
Michael Lee
VLM
128
1
0
04 Oct 2019
Perturbations are not Enough: Generating Adversarial Examples with Spatial Distortions
He Zhao
Trung Le
Paul Montague
O. de Vel
Tamas Abraham
Dinh Q. Phung
AAML
137
8
0
03 Oct 2019
Addressing Failure Prediction by Learning Model Confidence
Neural Information Processing Systems (NeurIPS), 2019
Charles Corbière
Nicolas Thome
Avner Bar-Hen
Matthieu Cord
P. Pérez
302
331
0
01 Oct 2019
Re-learning of Child Model for Misclassified data by using KL Divergence in AffectNet: A Database for Facial Expression
International Workshop on Computational Intelligence and Applications (CIA), 2019
T. Ichimura
Shin Kamada
CVBM
37
2
0
30 Sep 2019
Sampling the "Inverse Set" of a Neuron: An Approach to Understanding
  Neural Nets
Sampling the "Inverse Set" of a Neuron: An Approach to Understanding Neural NetsInternational Conference on Information Photonics (ICIP), 2019
Suryabhan Singh Hada
M. A. Carreira-Perpiñán
BDL
114
8
0
27 Sep 2019
Towards neural networks that provably know when they don't know
International Conference on Learning Representations (ICLR), 2019
Alexander Meinke
Matthias Hein
OODD
286
147
0
26 Sep 2019
Wider Networks Learn Better Features
D. Gilboa
Guy Gur-Ari
91
7
0
25 Sep 2019
Mixup Inference: Better Exploiting Mixup to Defend Adversarial Attacks
International Conference on Learning Representations (ICLR), 2019
Tianyu Pang
Kun Xu
Jun Zhu
AAML
211
111
0
25 Sep 2019
Input complexity and out-of-distribution detection with likelihood-based generative models
International Conference on Learning Representations (ICLR), 2019
Joan Serrà
David Álvarez
Vicenç Gómez
Olga Slizovskaia
José F. Núñez
Jordi Luque
OODD
413
292
0
25 Sep 2019
Switched linear projections for neural network interpretability
Lech Szymanski
B. McCane
C. Atkinson
FAtt MILM LLMSV
104
1
0
25 Sep 2019
Intelligent image synthesis to attack a segmentation CNN using adversarial learning
Liang Chen
P. Bentley
K. Mori
K. Misawa
M. Fujiwara
Daniel Rueckert
GAN AAML MedIm
114
20
0
24 Sep 2019
Propagated Perturbation of Adversarial Attack for well-known CNNs: Empirical Study and its Explanation
Jihyeun Yoon
Kyungyul Kim
Jongseong Jang
AAML
171
5
0
19 Sep 2019
Wasserstein Diffusion Tikhonov Regularization
A. Lin
Yonatan Dukler
Wuchen Li
Guido Montúfar
123
2
0
15 Sep 2019
Generating Accurate Pseudo-labels in Semi-Supervised Learning and Avoiding Overconfident Predictions via Hermite Polynomial Activations
Computer Vision and Pattern Recognition (CVPR), 2019
Vishnu Suresh Lokhande
Songwong Tasneeyapant
Abhay Venkatesh
Sathya Ravi
Vikas Singh
150
30
0
12 Sep 2019
Identifying and Resisting Adversarial Videos Using Temporal Consistency
Yang Liu
Xingxing Wei
Xiaochun Cao
AAML
151
17
0
11 Sep 2019
A Survey of Techniques All Classifiers Can Learn from Deep Networks: Models, Optimizations, and Regularization
Alireza Ghods
D. Cook
155
1
0
10 Sep 2019
Robust Full-FoV Depth Estimation in Tele-wide Camera System
IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2019
Kai Guo
Seongwook Song
Soonkeun Chang
Tae-ui Kim
S. Han
Irina Kim
MDE
93
1
0
08 Sep 2019
Learning to Discriminate Perturbations for Blocking Adversarial Attacks in Text Classification
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2019
Yichao Zhou
Jyun-Yu Jiang
Kai-Wei Chang
Wei Wang
AAML
140
132
0
06 Sep 2019
Metric Learning for Adversarial Robustness
Neural Information Processing Systems (NeurIPS), 2019
Chengzhi Mao
Ziyuan Zhong
Junfeng Yang
Carl Vondrick
Baishakhi Ray
OOD
330
201
0
03 Sep 2019
Universal, transferable and targeted adversarial attacks
Junde Wu
Rao Fu
AAML SILM
152
10
0
29 Aug 2019
Bayesian Nonparametrics for Non-exhaustive Learning
Yicheng Cheng
Bartek Rajwa
M. M. Dundar
45
0
0
26 Aug 2019
A Statistical Defense Approach for Detecting Adversarial Examples
Pattern Recognition in Information Systems (PRIS), 2019
Alessandro Cennamo
Ido Freeman
A. Kummert
AAML
95
5
0
26 Aug 2019
TEST: an End-to-End Network Traffic Examination and Identification Framework Based on Spatio-Temporal Features Extraction
Yi Zeng
Zihao Qi
Wencheng Chen
Yanzhe Huang
Xingxin Zheng
Han Qiu
65
6
0
26 Aug 2019
Analyzing Cyber-Physical Systems from the Perspective of Artificial Intelligence
Eric M. S. P. Veith
Lars Fischer
Martin Tröschel
Astrid Nieße
111
20
0
21 Aug 2019
Density estimation in representation space to predict model uncertainty
Communications in Computer and Information Science (CCIS), 2019
Tiago Ramalho
M. Corbalan
UQCV BDL
177
44
0
20 Aug 2019
Neural Architecture Search by Estimation of Network Structure Distributions
A. Muravev
Jenni Raitoharju
Moncef Gabbouj
OOD
195
1
0
19 Aug 2019
On the Robustness of Human Pose Estimation
Sahil Shah
Naman Jain
Abhishek Sharma
Arjun Jain
AAML OOD
244
23
0
18 Aug 2019
EigenRank by Committee: A Data Subset Selection and Failure Prediction paradigm for Robust Deep Learning based Medical Image Segmentation
Bilwaj Gaonkar
Joel Beckett
Mark Attiah
Christine Ahn
Matthew Edwards
...
Azim Laiwalla
Banafsheh Salehi
Bryan Yoo
Alex A. T. Bui
Luke Macyszyn
121
0
0
17 Aug 2019
Page 19 of 30