Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images

Computer Vision and Pattern Recognition (CVPR), 2015
arXiv:1412.1897, 5 December 2014
Anh Totti Nguyen, J. Yosinski, Jeff Clune
Tags: AAML

Papers citing "Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images"

50 / 1,455 papers shown
  • Categorical Foundations of Gradient-Based Learning. European Symposium on Programming (ESOP), 2021. Geoffrey S. H. Cruttwell, Bruno Gavranović, Neil Ghani, Paul W. Wilson, Fabio Zanasi. Tags: FedML. 02 Mar 2021.
  • Distribution-Aware Testing of Neural Networks Using Generative Models. International Conference on Software Engineering (ICSE), 2021. Swaroopa Dola, Matthew B. Dwyer, M. Soffa. 26 Feb 2021.
  • Hierarchical VAEs Know What They Don't Know. International Conference on Machine Learning (ICML), 2021. Jakob Drachmann Havtorn, J. Frellsen, Søren Hauberg, Lars Maaløe. Tags: DRL. 16 Feb 2021.
  • Guided Interpolation for Adversarial Training. Chen Chen, Jingfeng Zhang, Xilie Xu, Tianlei Hu, Gang Niu, Gang Chen, Masashi Sugiyama. Tags: AAML. 15 Feb 2021.
  • Defense Against Reward Poisoning Attacks in Reinforcement Learning. Kiarash Banihashem, Adish Singla, Goran Radanović. Tags: AAML. 10 Feb 2021.
  • Adversarial Robustness: What fools you makes you stronger. Grzegorz Gluch, R. Urbanke. Tags: AAML. 10 Feb 2021.
  • Estimation and Applications of Quantiles in Deep Binary Classification. IEEE Transactions on Artificial Intelligence (IEEE TAI), 2021. Anuj Tambwekar, Anirudh Maiya, S. Dhavala, Snehanshu Saha. Tags: UQCV. 09 Feb 2021.
  • Security and Privacy for Artificial Intelligence: Opportunities and Challenges. Ayodeji Oseni, Nour Moustafa, Helge Janicke, Peng Liu, Z. Tari, A. Vasilakos. Tags: AAML. 09 Feb 2021.
  • SPADE: A Spectral Method for Black-Box Adversarial Robustness Evaluation. International Conference on Machine Learning (ICML), 2021. Wuxinlin Cheng, Chenhui Deng, Zhiqiang Zhao, Yaohui Cai, Zhiru Zhang, Zhuo Feng. Tags: AAML. 07 Feb 2021.
  • Understanding the Interaction of Adversarial Training with Noisy Labels. Jianing Zhu, Jingfeng Zhang, Bo Han, Tongliang Liu, Gang Niu, Hongxia Yang, Mohan Kankanhalli, Masashi Sugiyama. Tags: AAML. 06 Feb 2021.
  • Towards Robust Neural Networks via Close-loop Control. International Conference on Learning Representations (ICLR), 2021. Zhuotong Chen, Qianxiao Li, Zheng Zhang. Tags: OOD, AAML. 03 Feb 2021.
  • Probabilistic Trust Intervals for Out of Distribution Detection. Gagandeep Singh, Deepak Mishra. Tags: UQCV, AAML, OOD. 02 Feb 2021.
  • Comparing hundreds of machine learning classifiers and discrete choice models in predicting travel behavior: an empirical benchmark. Transportation Research Part B: Methodological (TRPBM), 2021. Shenhao Wang, Baichuan Mo, Stephane Hess, Jinhuan Zhao, Jinhua Zhao. 01 Feb 2021.
  • ShufText: A Simple Black Box Approach to Evaluate the Fragility of Text Classification Models. International Conference on Machine Learning, Optimization, and Data Science (MOD), 2021. Rutuja Taware, Shraddha Varat, G. Salunke, Chaitanya Gawande, Geetanjali Kale, Rahul Khengare, Raviraj Joshi. 30 Jan 2021.
  • The Mind's Eye: Visualizing Class-Agnostic Features of CNNs. IEEE International Conference on Image Processing (ICIP), 2021. Alexandros Stergiou. Tags: FAtt. 29 Jan 2021.
  • A Convolutional Neural Network based Cascade Reconstruction for the IceCube Neutrino Observatory. R. Abbasi, M. Ackermann, J. Adams, J. Aguilar, M. Ahlers, ..., Zifei Shan, J. Yáñez, S. Yoshida, T. Yuan, Zheng Zhang. 27 Jan 2021.
  • Defenses Against Multi-Sticker Physical Domain Attacks on Classifiers. Xinwei Zhao, Matthew C. Stamm. Tags: AAML. 26 Jan 2021.
  • A Transferable Anti-Forensic Attack on Forensic CNNs Using A Generative Adversarial Network. Xinwei Zhao, Chen Chen, Matthew C. Stamm. Tags: GAN, AAML. 23 Jan 2021.
  • Copycat CNN: Are Random Non-Labeled Data Enough to Steal Knowledge from Black-box Models? Pattern Recognition, 2021. Jacson Rodrigues Correia-Silva, Rodrigo Berriel, C. Badue, Alberto F. de Souza, Thiago Oliveira-Santos. Tags: MLAU. 21 Jan 2021.
  • Understanding in Artificial Intelligence. S. Maetschke, D. M. Iraola, Pieter Barnard, Elaheh Shafieibavani, Peter Zhong, Ying Xu, Antonio Jimeno Yepes. Tags: ELM, VLM. 17 Jan 2021.
  • Malicious Code Detection: Run Trace Output Analysis by LSTM. IEEE Access, 2021. Cengiz Acarturk, Melih Sirlanci, Pinar Gurkan Balikcioglu, Deniz Demirci, Nazenin Sahin, Ozge A. Kucuk. 14 Jan 2021.
  • Analysis of skin lesion images with deep learning. Josef Steppan, S. Hanke. 11 Jan 2021.
  • SyReNN: A Tool for Analyzing Deep Neural Networks. International Journal on Software Tools for Technology Transfer (STTT), 2021. Matthew Sotoudeh, Aditya V. Thakur. Tags: AAML, GNN. 09 Jan 2021.
  • Approaching Neural Network Uncertainty Realism. Joachim Sicking, Alexander Kister, Matthias Fahrland, S. Eickeler, Fabian Hüger, S. Rüping, Peter Schlicht, Tim Wirtz. 08 Jan 2021.
  • Corner case data description and detection. Workshop on AI Engineering - Software Engineering for AI (ESEA), 2021. Tinghui Ouyang, Vicent Sant Marco, Yoshinao Isobe, H. Asoh, Y. Oiwa, Yoshiki Seo. Tags: AAML. 07 Jan 2021.
  • Uncertainty-sensitive Activity Recognition: a Reliability Benchmark and the CARING Models. International Conference on Pattern Recognition (ICPR), 2021. Alina Roitberg, Monica Haurilet, Manuel Martínez, Rainer Stiefelhagen. Tags: UQCV. 02 Jan 2021.
  • Out of Order: How Important Is The Sequential Order of Words in a Sentence in Natural Language Understanding Tasks? Findings, 2020. Thang M. Pham, Trung Bui, Long Mai, Anh Totti Nguyen. 30 Dec 2020.
  • A Survey on Neural Network Interpretability. IEEE Transactions on Emerging Topics in Computational Intelligence (IEEE TETCI), 2020. Yu Zhang, Peter Tiño, A. Leonardis, Shengcai Liu. Tags: FaML, XAI. 28 Dec 2020.
  • Robustness, Privacy, and Generalization of Adversarial Training. Fengxiang He, Shaopeng Fu, Bohan Wang, Dacheng Tao. 25 Dec 2020.
  • On the Granularity of Explanations in Model Agnostic NLP Interpretability. Yves Rychener, X. Renard, Djamé Seddah, P. Frossard, Marcin Detyniecki. Tags: MILM, FAtt. 24 Dec 2020.
  • Unveiling Real-Life Effects of Online Photo Sharing. IEEE Winter Conference on Applications of Computer Vision (WACV), 2020. V. Nguyen, Adrian Daniel Popescu, Jérôme Deshayes-Chossart. 24 Dec 2020.
  • Evolving the Behavior of Machines: From Micro to Macroevolution. iScience, 2020. Jean-Baptiste Mouret. Tags: AI4CE. 21 Dec 2020.
  • Blurring Fools the Network -- Adversarial Attacks by Feature Peak Suppression and Gaussian Blurring. Chenchen Zhao, Hao Li. Tags: AAML. 21 Dec 2020.
  • Exploiting Vulnerability of Pooling in Convolutional Neural Networks by Strict Layer-Output Manipulation for Adversarial Attacks. Chenchen Zhao, Hao Li. Tags: AAML. 21 Dec 2020.
  • Out-distribution aware Self-training in an Open World Setting. Maximilian Augustin, Matthias Hein. 21 Dec 2020.
  • Recent advances in deep learning theory. Fengxiang He, Dacheng Tao. Tags: AI4CE. 20 Dec 2020.
  • Deep Open Intent Classification with Adaptive Decision Boundary. AAAI Conference on Artificial Intelligence (AAAI), 2020. Hanlei Zhang, Hua Xu, Ting-En Lin. Tags: VLM. 18 Dec 2020.
  • Semantics and explanation: why counterfactual explanations produce adversarial examples in deep neural networks. Kieran Browne, Ben Swift. Tags: AAML, GAN. 18 Dec 2020.
  • Learning Prediction Intervals for Model Performance. AAAI Conference on Artificial Intelligence (AAAI), 2020. Benjamin Elder, Matthew Arnold, Anupama Murthi, Jirí Navrátil. 15 Dec 2020.
  • Exploring Vicinal Risk Minimization for Lightweight Out-of-Distribution Detection. Deepak Ravikumar, Sangamesh Kodge, Isha Garg, Kaushik Roy. Tags: OODD. 15 Dec 2020.
  • FAWA: Fast Adversarial Watermark Attack on Optical Character Recognition (OCR) Systems. Lu Chen, Jiao Sun, Wenyuan Xu. Tags: AAML. 15 Dec 2020.
  • Demystifying Deep Neural Networks Through Interpretation: A Survey. Giang Dao, Minwoo Lee. Tags: FaML, FAtt. 13 Dec 2020.
  • Dependency Decomposition and a Reject Option for Explainable Models. Jan Kronenberger, Anselm Haselhoff. Tags: FAtt, AAML. 11 Dec 2020.
  • Confidence Estimation via Auxiliary Models. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2020. Charles Corbière, Nicolas Thome, A. Saporta, Tuan-Hung Vu, Matthieu Cord, P. Pérez. Tags: TPM. 11 Dec 2020.
  • An Empirical Review of Adversarial Defenses. Ayush Goel. Tags: AAML. 10 Dec 2020.
  • Risk Management Framework for Machine Learning Security. J. Breier, A. Baldwin, H. Balinsky, Yang Liu. Tags: AAML. 09 Dec 2020.
  • EvaLDA: Efficient Evasion Attacks Towards Latent Dirichlet Allocation. Qi Zhou, Haipeng Chen, Yitao Zheng, Zhen Wang. Tags: AAML. 09 Dec 2020.
  • Quality-Diversity Optimization: a novel branch of stochastic optimization. Konstantinos Chatzilygeroudis, Antoine Cully, Vassilis Vassiliades, Jean-Baptiste Mouret. 08 Dec 2020.
  • Are DNNs fooled by extremely unrecognizable images? Soichiro Kumano, Hiroshi Kera, T. Yamasaki. Tags: AAML. 07 Dec 2020.
  • From a Fourier-Domain Perspective on Adversarial Examples to a Wiener Filter Defense for Semantic Segmentation. IEEE International Joint Conference on Neural Networks (IJCNN), 2020. Nikhil Kapoor, Andreas Bär, Serin Varghese, Jan David Schneider, Fabian Hüger, Peter Schlicht, Tim Fingscheidt. Tags: AAML. 02 Dec 2020.