Evaluating the visualization of what a Deep Neural Network has learned

21 September 2015 · arXiv:1509.06321
Wojciech Samek, Alexander Binder, G. Montavon, Sebastian Lapuschkin, K. Müller
Topic: XAI

Papers citing "Evaluating the visualization of what a Deep Neural Network has learned"

50 of 511 citing papers shown
Dual Decomposition of Convex Optimization Layers for Consistent Attention in Medical Images
Tom Ron
M. Weiler-Sagie
Tamir Hazan
FAtt
MedIm
24
6
0
06 Jun 2022
Comparing interpretation methods in mental state decoding analyses with deep learning models
A. Thomas
Christopher Ré
R. Poldrack
AI4CE
18
2
0
31 May 2022
How explainable are adversarially-robust CNNs?
Mehdi Nourelahi
Lars Kotthoff
Peijie Chen
Anh Totti Nguyen
AAML
FAtt
22
8
0
25 May 2022
Deletion and Insertion Tests in Regression Models
Naofumi Hama
Masayoshi Mase
Art B. Owen
27
8
0
25 May 2022
Towards Better Understanding Attribution Methods
Sukrut Rao
Moritz Böhle
Bernt Schiele
XAI
18
32
0
20 May 2022
The Solvability of Interpretability Evaluation Metrics
Yilun Zhou
J. Shah
70
8
0
18 May 2022
Explainable Deep Learning Methods in Medical Image Classification: A Survey
Cristiano Patrício
João C. Neves
Luís F. Teixeira
XAI
24
52
0
10 May 2022
Explain to Not Forget: Defending Against Catastrophic Forgetting with XAI
Sami Ede
Serop Baghdadlian
Leander Weber
A. Nguyen
Dario Zanca
Wojciech Samek
Sebastian Lapuschkin
CLL
27
6
0
04 May 2022
ExSum: From Local Explanations to Model Understanding
Yilun Zhou
Marco Tulio Ribeiro
J. Shah
FAtt
LRM
19
25
0
30 Apr 2022
Missingness Bias in Model Debugging
Saachi Jain
Hadi Salman
E. Wong
Pengchuan Zhang
Vibhav Vineet
Sai H. Vemprala
A. Madry
27
37
0
19 Apr 2022
Interpretability of Machine Learning Methods Applied to Neuroimaging
Elina Thibeau-Sutre
S. Collin
Ninon Burgos
O. Colliot
16
4
0
14 Apr 2022
Maximum Entropy Baseline for Integrated Gradients
Hanxiao Tan
FAtt
18
4
0
12 Apr 2022
Interpretable Research Replication Prediction via Variational Contextual Consistency Sentence Masking
Tianyi Luo
Rui Meng
Qing Guo
Y. Liu
12
4
0
28 Mar 2022
A Unified Study of Machine Learning Explanation Evaluation Metrics
Yipei Wang
Xiaoqian Wang
XAI
19
7
0
27 Mar 2022
Towards Explainable Evaluation Metrics for Natural Language Generation
Christoph Leiter
Piyawat Lertvittayakumjorn
M. Fomicheva
Wei-Ye Zhao
Yang Gao
Steffen Eger
AAML
ELM
27
20
0
21 Mar 2022
Don't Get Me Wrong: How to Apply Deep Visual Interpretations to Time Series
Christoffer Loeffler
Wei-Cheng Lai
Bjoern M. Eskofier
Dario Zanca
Lukas M. Schmidt
Christopher Mutschler
FAtt
AI4TS
35
5
0
14 Mar 2022
Evaluating Explainable AI on a Multi-Modal Medical Imaging Task: Can Existing Algorithms Fulfill Clinical Requirements?
Weina Jin
Xiaoxiao Li
Ghassan Hamarneh
27
51
0
12 Mar 2022
Fidelity of Interpretability Methods and Perturbation Artifacts in Neural Networks
L. Brocki
N. C. Chung
AAML
16
4
0
06 Mar 2022
Do Explanations Explain? Model Knows Best
Ashkan Khakzar
Pedram J. Khorsandi
Rozhin Nobahari
Nassir Navab
XAI
AAML
FAtt
11
23
0
04 Mar 2022
Threading the Needle of On and Off-Manifold Value Functions for Shapley Explanations
Chih-Kuan Yeh
Kuan-Yun Lee
Frederick Liu
Pradeep Ravikumar
FAtt
TDI
23
9
0
24 Feb 2022
Training Characteristic Functions with Reinforcement Learning: XAI-methods play Connect Four
S. Wäldchen
Felix Huber
Sebastian Pokutta
FAtt
28
8
0
23 Feb 2022
Evaluating Feature Attribution Methods in the Image Domain
Arne Gevaert
Axel-Jan Rousseau
Thijs Becker
D. Valkenborg
T. D. Bie
Yvan Saeys
FAtt
24
22
0
22 Feb 2022
Guidelines and Evaluation of Clinical Explainable AI in Medical Image Analysis
Weina Jin
Xiaoxiao Li
M. Fatehi
Ghassan Hamarneh
ELM
XAI
42
88
0
16 Feb 2022
Don't Lie to Me! Robust and Efficient Explainability with Verified Perturbation Analysis
Thomas Fel
Mélanie Ducoffe
David Vigouroux
Rémi Cadène
Mikael Capelle
C. Nicodeme
Thomas Serre
AAML
26
41
0
15 Feb 2022
Quantus: An Explainable AI Toolkit for Responsible Evaluation of Neural Network Explanations and Beyond
Anna Hedström
Leander Weber
Dilyara Bareeva
Daniel G. Krakowczyk
Franz Motzkus
Wojciech Samek
Sebastian Lapuschkin
Marina M.-C. Höhne
XAI
ELM
21
168
0
14 Feb 2022
Measurably Stronger Explanation Reliability via Model Canonization
Franz Motzkus
Leander Weber
Sebastian Lapuschkin
FAtt
12
6
0
14 Feb 2022
InterpretTime: a new approach for the systematic evaluation of neural-network interpretability in time series classification
Hugues Turbé
Mina Bjelogrlic
Christian Lovis
G. Mengaldo
AI4TS
22
6
0
11 Feb 2022
A Consistent and Efficient Evaluation Strategy for Attribution Methods
Yao Rong
Tobias Leemann
V. Borisov
Gjergji Kasneci
Enkelejda Kasneci
FAtt
23
92
0
01 Feb 2022
Feature Visualization within an Automated Design Assessment leveraging Explainable Artificial Intelligence Methods
Raoul Schönhof
Artem Werner
J. Elstner
Boldizsar Zopcsak
Ramez Awad
Marco F. Huber
AAML
16
12
0
28 Jan 2022
Model Agnostic Interpretability for Multiple Instance Learning
Joseph Early
C. Evers
Sarvapali Ramchurn
11
11
0
27 Jan 2022
PREVIS -- A Combined Machine Learning and Visual Interpolation Approach for Interactive Reverse Engineering in Assembly Quality Control
Patrick Ruediger-Flore
Felix Claus
V. Leonhardt
H. Hagen
J. Aurich
Christoph Garth
19
0
0
25 Jan 2022
From Anecdotal Evidence to Quantitative Evaluation Methods: A Systematic Review on Evaluating Explainable AI
Meike Nauta
Jan Trienes
Shreyasi Pathak
Elisa Nguyen
Michelle Peters
Yasmin Schmitt
Jorg Schlotterer
M. V. Keulen
C. Seifert
ELM
XAI
28
396
0
20 Jan 2022
Cyberbullying Classifiers are Sensitive to Model-Agnostic Perturbations
Chris Emmery
Ákos Kádár
Grzegorz Chrupała
Walter Daelemans
19
5
0
17 Jan 2022
Improving Deep Neural Network Classification Confidence using Heatmap-based eXplainable AI
Erico Tjoa
Hong Jing Khok
Tushar Chouhan
G. Cuntai
FAtt
25
4
0
30 Dec 2021
Forward Composition Propagation for Explainable Neural Reasoning
Isel Grau
Gonzalo Nápoles
M. Bello
Yamisleydi Salgueiro
A. Jastrzębska
22
0
0
23 Dec 2021
Explainable Artificial Intelligence Methods in Combating Pandemics: A Systematic Review
F. Giuste
Wenqi Shi
Yuanda Zhu
Tarun Naren
Monica Isgut
Ying Sha
L. Tong
Mitali S. Gupte
May D. Wang
24
73
0
23 Dec 2021
More Than Words: Towards Better Quality Interpretations of Text Classifiers
Muhammad Bilal Zafar
Philipp Schmidt
Michele Donini
Cédric Archambeau
F. Biessmann
Sanjiv Ranjan Das
K. Kenthapadi
FAtt
12
5
0
23 Dec 2021
Toward Explainable AI for Regression Models
S. Letzgus
Patrick Wagner
Jonas Lederer
Wojciech Samek
Klaus-Robert Müller
G. Montavon
XAI
30
63
0
21 Dec 2021
RELAX: Representation Learning Explainability
Kristoffer Wickstrøm
Daniel J. Trosten
Sigurd Løkse
Ahcène Boubekki
Karl Øyvind Mikalsen
Michael C. Kampffmeyer
Robert Jenssen
FAtt
13
14
0
19 Dec 2021
Global explainability in aligned image modalities
Justin Engelmann
Amos Storkey
Miguel O. Bernabeu
FAtt
30
4
0
17 Dec 2021
Evaluating saliency methods on artificial data with different background types
Céline Budding
Fabian Eitel
K. Ritter
Stefan Haufe
XAI
FAtt
MedIm
27
5
0
09 Dec 2021
What I Cannot Predict, I Do Not Understand: A Human-Centered Evaluation Framework for Explainability Methods
Julien Colin
Thomas Fel
Rémi Cadène
Thomas Serre
33
101
0
06 Dec 2021
Improving Deep Learning Interpretability by Saliency Guided Training
Aya Abdelsalam Ismail
H. C. Bravo
S. Feizi
FAtt
20
80
0
29 Nov 2021
Fed2: Feature-Aligned Federated Learning
Fuxun Yu
Weishan Zhang
Zhuwei Qin
Zirui Xu
Di Wang
Chenchen Liu
Zhi Tian
Xiang Chen
FedML
28
74
0
28 Nov 2021
Scrutinizing XAI using linear ground-truth data with suppressor variables
Rick Wilming
Céline Budding
K. Müller
Stefan Haufe
FAtt
16
26
0
14 Nov 2021
A Robust Unsupervised Ensemble of Feature-Based Explanations using Restricted Boltzmann Machines
V. Borisov
Johannes Meier
J. V. D. Heuvel
Hamed Jalali
Gjergji Kasneci
FAtt
39
5
0
14 Nov 2021
Look at the Variance! Efficient Black-box Explanations with Sobol-based Sensitivity Analysis
Thomas Fel
Rémi Cadène
Mathieu Chalvidal
Matthieu Cord
David Vigouroux
Thomas Serre
MLAU
FAtt
AAML
114
58
0
07 Nov 2021
Transparency of Deep Neural Networks for Medical Image Analysis: A Review of Interpretability Methods
Zohaib Salahuddin
Henry C. Woodruff
A. Chatterjee
Philippe Lambin
18
302
0
01 Nov 2021
Revisiting Sanity Checks for Saliency Maps
G. Yona
D. Greenfeld
AAML
FAtt
27
25
0
27 Oct 2021
Evaluating the Faithfulness of Importance Measures in NLP by Recursively Masking Allegedly Important Tokens and Retraining
Andreas Madsen
Nicholas Meade
Vaibhav Adlakha
Siva Reddy
111
35
0
15 Oct 2021