Evaluating the visualization of what a Deep Neural Network has learned

21 September 2015
Wojciech Samek
Alexander Binder
G. Montavon
Sebastian Lapuschkin
K. Müller
    XAI
ArXiv | PDF | HTML

Papers citing "Evaluating the visualization of what a Deep Neural Network has learned"

50 / 511 papers shown
SCAAT: Improving Neural Network Interpretability via Saliency Constrained Adaptive Adversarial Training
Rui Xu
Wenkang Qin
Peixiang Huang
Hao Wang
Lin Luo
FAtt
AAML
28
2
0
09 Nov 2023
Be Careful When Evaluating Explanations Regarding Ground Truth
Hubert Baniecki
Maciej Chrabaszcz
Andreas Holzinger
Bastian Pfeifer
Anna Saranti
P. Biecek
FAtt
AAML
46
3
0
08 Nov 2023
Assessing Fidelity in XAI post-hoc techniques: A Comparative Study with Ground Truth Explanations Datasets
Miquel Miró-Nicolau
Antoni Jaume-i-Capó
Gabriel Moyà Alcover
XAI
42
11
0
03 Nov 2023
Understanding Parameter Saliency via Extreme Value Theory
Shuo Wang
Issei Sato
AAML
FAtt
21
0
0
27 Oct 2023
Insightful analysis of historical sources at scales beyond human capabilities using unsupervised Machine Learning and XAI
Oliver Eberle
Jochen Büttner
Hassan el-Hajj
G. Montavon
Klaus-Robert Muller
Matteo Valleriani
23
1
0
13 Oct 2023
Faithfulness Measurable Masked Language Models
Andreas Madsen
Siva Reddy
Sarath Chandar
38
3
0
11 Oct 2023
AttributionLab: Faithfulness of Feature Attribution Under Controllable Environments
Yang Zhang
Yawei Li
Hannah Brown
Mina Rezaei
Bernd Bischl
Philip Torr
Ashkan Khakzar
Kenji Kawaguchi
OOD
55
1
0
10 Oct 2023
Explaining Deep Face Algorithms through Visualization: A Survey
Thrupthi Ann John
Vineeth N. Balasubramanian
C. V. Jawahar
CVBM
32
1
0
26 Sep 2023
Goodhart's Law Applies to NLP's Explanation Benchmarks
Jennifer Hsia
Danish Pruthi
Aarti Singh
Zachary Chase Lipton
30
6
0
28 Aug 2023
On Gradient-like Explanation under a Black-box Setting: When Black-box Explanations Become as Good as White-box
Yingcheng Cai
Gerhard Wunder
FAtt
31
0
0
18 Aug 2023
A Dual-Perspective Approach to Evaluating Feature Attribution Methods
Yawei Li
Yanglin Zhang
Kenji Kawaguchi
Ashkan Khakzar
Bernd Bischl
Mina Rezaei
FAtt
XAI
47
0
0
17 Aug 2023
Robust Infidelity: When Faithfulness Measures on Masked Language Models Are Misleading
Evan Crothers
H. Viktor
Nathalie Japkowicz
AAML
19
1
0
13 Aug 2023
FunnyBirds: A Synthetic Vision Dataset for a Part-Based Analysis of Explainable AI Methods
Robin Hesse
Simone Schaub-Meyer
Stefan Roth
AAML
37
32
0
11 Aug 2023
SAFE: Saliency-Aware Counterfactual Explanations for DNN-based Automated Driving Systems
Amir Samadi
A. Shirian
K. Koufos
Kurt Debattista
M. Dianati
AAML
FAtt
LRM
26
8
0
28 Jul 2023
Discriminative Feature Attributions: Bridging Post Hoc Explainability and Inherent Interpretability
Usha Bhalla
Suraj Srinivas
Himabindu Lakkaraju
FAtt
CML
29
6
0
27 Jul 2023
A New Perspective on Evaluation Methods for Explainable Artificial Intelligence (XAI)
Timo Speith
Markus Langer
29
12
0
26 Jul 2023
Saliency strikes back: How filtering out high frequencies improves white-box explanations
Sabine Muzellec
Thomas Fel
Victor Boutin
Léo Andéol
R. V. Rullen
Thomas Serre
FAtt
30
0
0
18 Jul 2023
Visual Explanations with Attributions and Counterfactuals on Time Series Classification
U. Schlegel
Daniela Oelke
Daniel A. Keim
Mennatallah El-Assady
AI4TS
FAtt
33
4
0
14 Jul 2023
Exploring the Lottery Ticket Hypothesis with Explainability Methods: Insights into Sparse Network Performance
Shantanu Ghosh
Kayhan Batmanghelich
30
0
0
07 Jul 2023
Dividing and Conquering a BlackBox to a Mixture of Interpretable Models: Route, Interpret, Repeat
Shantanu Ghosh
K. Yu
Forough Arabshahi
Kayhan Batmanghelich
MoE
26
13
0
07 Jul 2023
Harmonizing Feature Attributions Across Deep Learning Architectures: Enhancing Interpretability and Consistency
Md Abdul Kadir
G. Addluri
Daniel Sonntag
FAtt
19
1
0
05 Jul 2023
Does Saliency-Based Training bring Robustness for Deep Neural Networks in Image Classification?
Ali Karkehabadi
FAtt
AAML
8
0
0
28 Jun 2023
Towards Explainable Evaluation Metrics for Machine Translation
Christoph Leiter
Piyawat Lertvittayakumjorn
M. Fomicheva
Wei-Ye Zhao
Yang Gao
Steffen Eger
ELM
30
13
0
22 Jun 2023
ProtoGate: Prototype-based Neural Networks with Global-to-local Feature Selection for Tabular Biomedical Data
Xiangjian Jiang
Andrei Margeloiu
Nikola Simidjievski
M. Jamnik
OOD
34
10
0
21 Jun 2023
Explainable AI and Machine Learning Towards Human Gait Deterioration Analysis
Abdullah Alharthi
19
0
0
12 Jun 2023
Two-Stage Holistic and Contrastive Explanation of Image Classification
Weiyan Xie
Xiao-hui Li
Zhi Lin
Leonard K. M. Poon
Caleb Chen Cao
N. Zhang
24
2
0
10 Jun 2023
Strategies to exploit XAI to improve classification systems
Andrea Apicella
Luca Di Lorenzo
Francesco Isgrò
A. Pollastro
R. Prevete
11
9
0
09 Jun 2023
DecompX: Explaining Transformers Decisions by Propagating Token Decomposition
Ali Modarressi
Mohsen Fayyaz
Ehsan Aghazadeh
Yadollah Yaghoobzadeh
Mohammad Taher Pilehvar
25
25
0
05 Jun 2023
Theoretical Behavior of XAI Methods in the Presence of Suppressor Variables
Rick Wilming
Leo Kieslich
Benedict Clark
Stefan Haufe
19
9
0
02 Jun 2023
Can We Trust Explainable AI Methods on ASR? An Evaluation on Phoneme Recognition
Xiao-lan Wu
P. Bell
A. Rajan
19
5
0
29 May 2023
Counterfactual Explainer Framework for Deep Reinforcement Learning Models Using Policy Distillation
Amir Samadi
K. Koufos
Kurt Debattista
M. Dianati
OffRL
37
3
0
25 May 2023
An Experimental Investigation into the Evaluation of Explainability Methods
Sédrick Stassin
A. Englebert
Géraldin Nanfack
Julien Albert
Nassim Versbraegen
Gilles Peiffer
Miriam Doh
Nicolas Riche
Benoit Frénay
Christophe De Vleeschouwer
XAI
ELM
16
5
0
25 May 2023
Explain Any Concept: Segment Anything Meets Concept-Based Explanation
Ao Sun
Pingchuan Ma
Yuanyuan Yuan
Shuai Wang
LLMAG
23
31
0
17 May 2023
Human Attention-Guided Explainable Artificial Intelligence for Computer Vision Models
Guoyang Liu
Jindi Zhang
Antoni B. Chan
J. H. Hsiao
29
14
0
05 May 2023
Evaluating Post-hoc Interpretability with Intrinsic Interpretability
J. P. Amorim
P. Abreu
João A. M. Santos
Henning Muller
FAtt
30
1
0
04 May 2023
On Pitfalls of RemOve-And-Retrain: Data Processing Inequality Perspective
J. Song
Keumgang Cha
Junghoon Seo
40
2
0
26 Apr 2023
Robustness of Visual Explanations to Common Data Augmentation
Lenka Tětková
Lars Kai Hansen
AAML
26
6
0
18 Apr 2023
ODAM: Gradient-based instance-specific visual explanations for object detection
Chenyang Zhao
Antoni B. Chan
FAtt
21
8
0
13 Apr 2023
Explainable Artificial Intelligence Architecture for Melanoma Diagnosis Using Indicator Localization and Self-Supervised Learning
Ruitong Sun
Mohammad Rostami
16
2
0
26 Mar 2023
Analyzing Effects of Mixed Sample Data Augmentation on Model Interpretability
Soyoun Won
Sung-Ho Bae
Seong Tae Kim
50
2
0
26 Mar 2023
Better Understanding Differences in Attribution Methods via Systematic Evaluations
Sukrut Rao
Moritz D Boehle
Bernt Schiele
XAI
29
2
0
21 Mar 2023
EvalAttAI: A Holistic Approach to Evaluating Attribution Maps in Robust and Non-Robust Models
Ian E. Nielsen
Ravichandran Ramachandran
N. Bouaynaya
Hassan M. Fathallah-Shaykh
Ghulam Rasool
AAML
FAtt
41
7
0
15 Mar 2023
Pixel-Level Explanation of Multiple Instance Learning Models in Biomedical Single Cell Images
Ario Sadafi
Oleksandra Adonkina
Ashkan Khakzar
P. Lienemann
Rudolf Matthias Hehr
Daniel Rueckert
Nassir Navab
Carsten Marr
FAtt
48
10
0
15 Mar 2023
Explainable AI for Time Series via Virtual Inspection Layers
Johanna Vielhaben
Sebastian Lapuschkin
G. Montavon
Wojciech Samek
XAI
AI4TS
18
25
0
11 Mar 2023
On the Soundness of XAI in Prognostics and Health Management (PHM)
D. Martín
Juan Galán Páez
J. Borrego-Díaz
45
12
0
09 Mar 2023
Feature Perturbation Augmentation for Reliable Evaluation of Importance Estimators in Neural Networks
L. Brocki
N. C. Chung
FAtt
AAML
43
11
0
02 Mar 2023
Finding the right XAI method -- A Guide for the Evaluation and Ranking of Explainable AI Methods in Climate Science
P. Bommer
M. Kretschmer
Anna Hedström
Dilyara Bareeva
Marina M.-C. Höhne
46
38
0
01 Mar 2023
Don't be fooled: label leakage in explanation methods and the importance of their quantitative evaluation
N. Jethani
A. Saporta
Rajesh Ranganath
FAtt
29
10
0
24 Feb 2023
The Generalizability of Explanations
Hanxiao Tan
FAtt
18
1
0
23 Feb 2023
Tell Model Where to Attend: Improving Interpretability of Aspect-Based Sentiment Classification via Small Explanation Annotations
Zhenxiao Cheng
Jie Zhou
Wen Wu
Qin Chen
Liang He
32
3
0
21 Feb 2023