ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Home / Papers / 1412.1897 / Cited By
Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images

5 December 2014
Anh Totti Nguyen, J. Yosinski, Jeff Clune
[AAML]

Papers citing "Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images"

50 / 1,401 papers shown
Distribution-Aware Testing of Neural Networks Using Generative Models
  Swaroopa Dola, Matthew B. Dwyer, M. Soffa (26 Feb 2021)
Hierarchical VAEs Know What They Don't Know
  Jakob Drachmann Havtorn, J. Frellsen, Søren Hauberg, Lars Maaløe (16 Feb 2021) [DRL]
Guided Interpolation for Adversarial Training
  Chen Chen, Jingfeng Zhang, Xilie Xu, Tianlei Hu, Gang Niu, Gang Chen, Masashi Sugiyama (15 Feb 2021) [AAML]
Defense Against Reward Poisoning Attacks in Reinforcement Learning
  Kiarash Banihashem, Adish Singla, Goran Radanović (10 Feb 2021) [AAML]
Adversarial Robustness: What fools you makes you stronger
  Grzegorz Gluch, R. Urbanke (10 Feb 2021) [AAML]
Estimation and Applications of Quantiles in Deep Binary Classification
  Anuj Tambwekar, Anirudh Maiya, S. Dhavala, Snehanshu Saha (09 Feb 2021) [UQCV]
Security and Privacy for Artificial Intelligence: Opportunities and Challenges
  Ayodeji Oseni, Nour Moustafa, Helge Janicke, Peng Liu, Z. Tari, A. Vasilakos (09 Feb 2021) [AAML]
SPADE: A Spectral Method for Black-Box Adversarial Robustness Evaluation
  Wuxinlin Cheng, Chenhui Deng, Zhiqiang Zhao, Yaohui Cai, Zhiru Zhang, Zhuo Feng (07 Feb 2021) [AAML]
Understanding the Interaction of Adversarial Training with Noisy Labels
  Jianing Zhu, Jingfeng Zhang, Bo Han, Tongliang Liu, Gang Niu, Hongxia Yang, Mohan Kankanhalli, Masashi Sugiyama (06 Feb 2021) [AAML]
Towards Robust Neural Networks via Close-loop Control
  Zhuotong Chen, Qianxiao Li, Zheng Zhang (03 Feb 2021) [OOD, AAML]
Probabilistic Trust Intervals for Out of Distribution Detection
  Gagandeep Singh, Deepak Mishra (02 Feb 2021) [UQCV, AAML, OOD]
Comparing hundreds of machine learning classifiers and discrete choice models in predicting travel behavior: an empirical benchmark
  Shenhao Wang, Baichuan Mo, Stephane Hess, Jinhuan Zhao, Jinhua Zhao (01 Feb 2021)
ShufText: A Simple Black Box Approach to Evaluate the Fragility of Text Classification Models
  Rutuja Taware, Shraddha Varat, G. Salunke, Chaitanya Gawande, Geetanjali Kale, Rahul Khengare, Raviraj Joshi (30 Jan 2021)
The Mind's Eye: Visualizing Class-Agnostic Features of CNNs
  Alexandros Stergiou (29 Jan 2021) [FAtt]
A Convolutional Neural Network based Cascade Reconstruction for the IceCube Neutrino Observatory
  R. Abbasi, M. Ackermann, J. Adams, J. Aguilar, M. Ahlers, ..., Zifei Shan, J. Yáñez, S. Yoshida, T. Yuan, Zheng Zhang (27 Jan 2021)
Defenses Against Multi-Sticker Physical Domain Attacks on Classifiers
  Xinwei Zhao, Matthew C. Stamm (26 Jan 2021) [AAML]
A Transferable Anti-Forensic Attack on Forensic CNNs Using A Generative Adversarial Network
  Xinwei Zhao, Chen Chen, Matthew C. Stamm (23 Jan 2021) [GAN, AAML]
Copycat CNN: Are Random Non-Labeled Data Enough to Steal Knowledge from Black-box Models?
  Jacson Rodrigues Correia-Silva, Rodrigo Berriel, C. Badue, Alberto F. de Souza, Thiago Oliveira-Santos (21 Jan 2021) [MLAU]
Understanding in Artificial Intelligence
  S. Maetschke, D. M. Iraola, Pieter Barnard, Elaheh Shafieibavani, Peter Zhong, Ying Xu, Antonio Jimeno Yepes (17 Jan 2021) [ELM, VLM]
Malicious Code Detection: Run Trace Output Analysis by LSTM
  Cengiz Acarturk, Melih Sirlanci, Pinar Gurkan Balikcioglu, Deniz Demirci, Nazenin Sahin, Ozge A. Kucuk (14 Jan 2021)
Analysis of skin lesion images with deep learning
  Josef Steppan, S. Hanke (11 Jan 2021)
SyReNN: A Tool for Analyzing Deep Neural Networks
  Matthew Sotoudeh, Aditya V. Thakur (09 Jan 2021) [AAML, GNN]
Approaching Neural Network Uncertainty Realism
  Joachim Sicking, Alexander Kister, Matthias Fahrland, S. Eickeler, Fabian Hüger, S. Rüping, Peter Schlicht, Tim Wirtz (08 Jan 2021)
Corner case data description and detection
  Tinghui Ouyang, Vicent Sant Marco, Yoshinao Isobe, H. Asoh, Y. Oiwa, Yoshiki Seo (07 Jan 2021) [AAML]
07 Jan 2021
Uncertainty-sensitive Activity Recognition: a Reliability Benchmark and
  the CARING Models
Uncertainty-sensitive Activity Recognition: a Reliability Benchmark and the CARING Models
Alina Roitberg
Monica Haurilet
Manuel Martínez
Rainer Stiefelhagen
UQCV
39
6
0
02 Jan 2021
Out of Order: How Important Is The Sequential Order of Words in a Sentence in Natural Language Understanding Tasks?
  Thang M. Pham, Trung Bui, Long Mai, Anh Totti Nguyen (30 Dec 2020)
A Survey on Neural Network Interpretability
  Yu Zhang, Peter Tiño, A. Leonardis, K. Tang (28 Dec 2020) [FaML, XAI]
Robustness, Privacy, and Generalization of Adversarial Training
  Fengxiang He, Shaopeng Fu, Bohan Wang, Dacheng Tao (25 Dec 2020)
On the Granularity of Explanations in Model Agnostic NLP Interpretability
  Yves Rychener, X. Renard, Djamé Seddah, P. Frossard, Marcin Detyniecki (24 Dec 2020) [MILM, FAtt]
Unveiling Real-Life Effects of Online Photo Sharing
  V. Nguyen, Adrian Daniel Popescu, Jérôme Deshayes-Chossart (24 Dec 2020)
Evolving the Behavior of Machines: From Micro to Macroevolution
  Jean-Baptiste Mouret (21 Dec 2020) [AI4CE]
Blurring Fools the Network -- Adversarial Attacks by Feature Peak Suppression and Gaussian Blurring
  Chenchen Zhao, Hao Li (21 Dec 2020) [AAML]
Exploiting Vulnerability of Pooling in Convolutional Neural Networks by Strict Layer-Output Manipulation for Adversarial Attacks
  Chenchen Zhao, Hao Li (21 Dec 2020) [AAML]
Out-distribution aware Self-training in an Open World Setting
  Maximilian Augustin, Matthias Hein (21 Dec 2020)
Recent advances in deep learning theory
  Fengxiang He, Dacheng Tao (20 Dec 2020) [AI4CE]
Deep Open Intent Classification with Adaptive Decision Boundary
  Hanlei Zhang, Hua Xu, Ting-En Lin (18 Dec 2020) [VLM]
Semantics and explanation: why counterfactual explanations produce adversarial examples in deep neural networks
  Kieran Browne, Ben Swift (18 Dec 2020) [AAML, GAN]
Learning Prediction Intervals for Model Performance
  Benjamin Elder, Matthew Arnold, Anupama Murthi, Jirí Navrátil (15 Dec 2020)
Exploring Vicinal Risk Minimization for Lightweight Out-of-Distribution Detection
  Deepak Ravikumar, Sangamesh Kodge, Isha Garg, Kaushik Roy (15 Dec 2020) [OODD]
FAWA: Fast Adversarial Watermark Attack on Optical Character Recognition (OCR) Systems
  Lu Chen, Jiao Sun, Wenyuan Xu (15 Dec 2020) [AAML]
Demystifying Deep Neural Networks Through Interpretation: A Survey
  Giang Dao, Minwoo Lee (13 Dec 2020) [FaML, FAtt]
Dependency Decomposition and a Reject Option for Explainable Models
  Jan Kronenberger, Anselm Haselhoff (11 Dec 2020) [FAtt, AAML]
Confidence Estimation via Auxiliary Models
  Charles Corbière, Nicolas Thome, A. Saporta, Tuan-Hung Vu, Matthieu Cord, P. Pérez (11 Dec 2020) [TPM]
An Empirical Review of Adversarial Defenses
  Ayush Goel (10 Dec 2020) [AAML]
Risk Management Framework for Machine Learning Security
  J. Breier, A. Baldwin, H. Balinsky, Yang Liu (09 Dec 2020) [AAML]
EvaLDA: Efficient Evasion Attacks Towards Latent Dirichlet Allocation
  Qi Zhou, Haipeng Chen, Yitao Zheng, Zhen Wang (09 Dec 2020) [AAML]
Quality-Diversity Optimization: a novel branch of stochastic optimization
  Konstantinos Chatzilygeroudis, Antoine Cully, Vassilis Vassiliades, Jean-Baptiste Mouret (08 Dec 2020)
Are DNNs fooled by extremely unrecognizable images?
  Soichiro Kumano, Hiroshi Kera, T. Yamasaki (07 Dec 2020) [AAML]
From a Fourier-Domain Perspective on Adversarial Examples to a Wiener Filter Defense for Semantic Segmentation
  Nikhil Kapoor, Andreas Bär, Serin Varghese, Jan David Schneider, Fabian Hüger, Peter Schlicht, Tim Fingscheidt (02 Dec 2020) [AAML]
MAAD-Face: A Massively Annotated Attribute Dataset for Face Images
  Philipp Terhörst, Daniel Fahrmann, Jan Niklas Kolf, Naser Damer, Florian Kirchbuchner, Arjan Kuijper (02 Dec 2020) [CVBM]