Evaluating the visualization of what a Deep Neural Network has learned

21 September 2015
Wojciech Samek, Alexander Binder, G. Montavon, Sebastian Lapuschkin, K. Müller
XAI

Papers citing "Evaluating the visualization of what a Deep Neural Network has learned"

Showing 50 of 511 citing papers (title; authors; topic tags; date):
Explainable Adversarial Attacks in Deep Neural Networks Using Activation Profiles
G. Cantareira, R. Mello, F. Paulovich · AAML · 18 Mar 2021

Neural Network Attribution Methods for Problems in Geoscience: A Novel Synthetic Benchmark Dataset
Antonios Mamalakis, I. Ebert-Uphoff, E. Barnes · OOD · 18 Mar 2021

Interpretable Deep Learning for the Remote Characterisation of Ambulation in Multiple Sclerosis using Smartphones
Andrew P. Creagh, F. Lipsmeier, M. Lindemann, M. D. Vos · 16 Mar 2021

Have We Learned to Explain?: How Interpretability Methods Can Learn to Encode Predictions in their Interpretations
N. Jethani, Mukund Sudarshan, Yindalon Aphinyanagphongs, Rajesh Ranganath · FAtt · 02 Mar 2021

Do Input Gradients Highlight Discriminative Features?
Harshay Shah, Prateek Jain, Praneeth Netrapalli · AAML, FAtt · 25 Feb 2021

Resilience of Bayesian Layer-Wise Explanations under Adversarial Attacks
Ginevra Carbone, G. Sanguinetti, Luca Bortolussi · FAtt, AAML · 22 Feb 2021

MIMIC-IF: Interpretability and Fairness Evaluation of Deep Learning Models on MIMIC-IV Dataset
Chuizheng Meng, Loc Trinh, Nan Xu, Yan Liu · 12 Feb 2021

Convolutional Neural Network Interpretability with General Pattern Theory
Erico Tjoa, Cuntai Guan · FAtt, AI4CE · 05 Feb 2021

Evaluating Input Perturbation Methods for Interpreting CNNs and Saliency Map Comparison
Lukas Brunke, Prateek Agrawal, Nikhil George · AAML, FAtt · 26 Jan 2021

Benchmarking Perturbation-based Saliency Maps for Explaining Atari Agents
Tobias Huber, Benedikt Limmer, Elisabeth André · FAtt · 18 Jan 2021

Explainability of deep vision-based autonomous driving systems: Review and challenges
Éloi Zablocki, H. Ben-younes, P. Pérez, Matthieu Cord · XAI · 13 Jan 2021

Towards Interpretable Ensemble Learning for Image-based Malware Detection
Yuzhou Lin, Xiaolin Chang · AAML · 13 Jan 2021

Explaining the Black-box Smoothly- A Counterfactual Approach
Junyu Chen, Yong Du, Yufan He, W. Paul Segars, Ye Li · MedIm, FAtt · 11 Jan 2021

Quantitative Evaluations on Saliency Methods: An Experimental Study
Xiao-hui Li, Yuhan Shi, Haoyang Li, Wei Bai, Yuanwei Song, Caleb Chen Cao, Lei Chen · FAtt, XAI · 31 Dec 2020

A Survey on Neural Network Interpretability
Yu Zhang, Peter Tiño, A. Leonardis, K. Tang · FaML, XAI · 28 Dec 2020

Towards Robust Explanations for Deep Neural Networks
Ann-Kathrin Dombrowski, Christopher J. Anders, K. Müller, Pan Kessel · FAtt · 18 Dec 2020

Improving 3D convolutional neural network comprehensibility via interactive visualization of relevance maps: Evaluation in Alzheimer's disease
M. Dyrba, Moritz Hanzig, S. Altenstein, Sebastian Bader, Tommaso Ballarini, ..., B. Ertl-Wagner, M. Wagner, J. Wiltfang, F. Jessen, S. Teipel · FAtt, MedIm · 18 Dec 2020

Improving healthcare access management by predicting patient no-show behaviour
David Barrera Ferro, S. Brailsford, Cristián Bravo, Honora K. Smith · 10 Dec 2020

Interpretable Graph Capsule Networks for Object Recognition
Jindong Gu, Volker Tresp · FAtt · 03 Dec 2020

Improving Interpretability in Medical Imaging Diagnosis using Adversarial Training
Andrei Margeloiu, Nikola Simidjievski, M. Jamnik, Adrian Weller · GAN, AAML, MedIm, FAtt · 02 Dec 2020

Achievements and Challenges in Explaining Deep Learning based Computer-Aided Diagnosis Systems
Adriano Lucieri, Muhammad Naseer Bajwa, Andreas Dengel, Sheraz Ahmed · 26 Nov 2020

Quantifying Explainers of Graph Neural Networks in Computational Pathology
Guillaume Jaume, Pushpak Pati, Behzad Bozorgtabar, Antonio Foncubierta-Rodríguez, Florinda Feroce, A. Anniciello, T. Rau, Jean-Philippe Thiran, M. Gabrani, O. Goksel · FAtt · 25 Nov 2020

Backdoor Attacks on the DNN Interpretation System
Shihong Fang, A. Choromańska · FAtt, AAML · 21 Nov 2020

One Explanation is Not Enough: Structured Attention Graphs for Image Classification
Vivswan Shitole, Li Fuxin, Minsuk Kahng, Prasad Tadepalli, Alan Fern · FAtt, GNN · 13 Nov 2020

Generalized Constraints as A New Mathematical Problem in Artificial Intelligence: A Review and Perspective
Bao-Gang Hu, Hanbing Qu · AI4CE · 12 Nov 2020

GANMEX: One-vs-One Attributions Guided by GAN-based Counterfactual Explanation Baselines
Sheng-Min Shih, Pin-Ju Tien, Zohar Karnin · FAtt · 11 Nov 2020

Toward Scalable and Unified Example-based Explanation and Outlier Detection
Penny Chong, Ngai-man Cheung, Yuval Elovici, Alexander Binder · 11 Nov 2020

Debugging Tests for Model Explanations
Julius Adebayo, M. Muelly, Ilaria Liccardi, Been Kim · FAtt · 10 Nov 2020

What Did You Think Would Happen? Explaining Agent Behaviour Through Intended Outcomes
Herman Yau, Chris Russell, Simon Hadfield · FAtt, LRM · 10 Nov 2020

Objective Diagnosis for Histopathological Images Based on Machine Learning Techniques: Classical Approaches and New Trends
Naira Elazab, Hassan H. Soliman, Shaker El-Sappagh, S. Islam, Mohammed M Elmogy · 10 Nov 2020

It's All in the Name: A Character Based Approach To Infer Religion
Rochana Chaturvedi, Sugat Chaturvedi · 27 Oct 2020

Benchmarking Deep Learning Interpretability in Time Series Predictions
Aya Abdelsalam Ismail, Mohamed K. Gunady, H. C. Bravo, S. Feizi · XAI, AI4TS, FAtt · 26 Oct 2020

Investigating Saturation Effects in Integrated Gradients
Vivek Miglani, Narine Kokhlikyan, B. Alsallakh, Miguel Martin, Orion Reblitz-Richardson · FAtt · 23 Oct 2020

An explainable deep vision system for animal classification and detection in trail-camera images with automatic post-deployment retraining
Golnaz Moallem, Don Pathirage, Joel Reznick, J. Gallagher, H. Sari-Sarraf · 22 Oct 2020

ERIC: Extracting Relations Inferred from Convolutions
Joe Townsend, Theodoros Kasioumis, Hiroya Inakoshi · NAI, FAtt · 19 Oct 2020

A general approach to compute the relevance of middle-level input features
Andrea Apicella, Salvatore Giugliano, Francesco Isgrò, R. Prevete · 16 Oct 2020

Evaluating Attribution Methods using White-Box LSTMs
Sophie Hao · FAtt, XAI · 16 Oct 2020

Convolutional Neural Network for Blur Images Detection as an Alternative for Laplacian Method
Tomasz Szandała · 15 Oct 2020

Learning Propagation Rules for Attribution Map Generation
Yiding Yang, Jiayan Qiu, Xiuming Zhang, Dacheng Tao, Xinchao Wang · FAtt · 14 Oct 2020

A Graph Neural Network Framework for Causal Inference in Brain Networks
S. Wein, W. Malloni, A. Tomé, Sebastian M. Frank, Gina-Isabelle Henze, S. Wüst, M. Greenlee, E. Lang · 14 Oct 2020

Evaluating and Characterizing Human Rationales
Samuel Carton, Anirudh Rathore, Chenhao Tan · 09 Oct 2020

Simplifying the explanation of deep neural networks with sufficient and necessary feature-sets: case of text classification
Florentin Flambeau Jiechieu Kameni, Norbert Tsopzé · XAI, FAtt, MedIm · 08 Oct 2020

Learning Variational Word Masks to Improve the Interpretability of Neural Text Classifiers
Hanjie Chen, Yangfeng Ji · AAML, VLM · 01 Oct 2020

What Do You See? Evaluation of Explainable Artificial Intelligence (XAI) Interpretability through Neural Backdoors
Yi-Shan Lin, Wen-Chuan Lee, Z. Berkay Celik · XAI · 22 Sep 2020

Survey of explainable machine learning with visual and granular methods beyond quasi-explanations
Boris Kovalerchuk, M. Ahmad (University of Washington Tacoma) · 21 Sep 2020

A Multisensory Learning Architecture for Rotation-invariant Object Recognition
M. Kirtay, G. Schillaci, Verena V. Hafner · 14 Sep 2020

How Good is your Explanation? Algorithmic Stability Measures to Assess the Quality of Explanations for Deep Neural Networks
Thomas Fel, David Vigouroux, Rémi Cadène, Thomas Serre · XAI, FAtt · 07 Sep 2020

Quantifying Explainability of Saliency Methods in Deep Neural Networks with a Synthetic Dataset
Erico Tjoa, Cuntai Guan · XAI, FAtt · 07 Sep 2020

Generalization on the Enhancement of Layerwise Relevance Interpretability of Deep Neural Network
Erico Tjoa, Cuntai Guan · FAtt · 05 Sep 2020

A Unified Taylor Framework for Revisiting Attribution Methods
Huiqi Deng, Na Zou, Mengnan Du, Weifu Chen, Guo-Can Feng, Xia Hu · FAtt, TDI · 21 Aug 2020