arXiv:1509.06321
Evaluating the visualization of what a Deep Neural Network has learned
21 September 2015
Wojciech Samek, Alexander Binder, G. Montavon, Sebastian Lapuschkin, K. Müller
Tags: XAI
Papers citing
"Evaluating the visualization of what a Deep Neural Network has learned"
50 / 511 papers shown
- Interpretable Neural Networks with Frank-Wolfe: Sparse Relevance Maps and Relevance Orderings (15 Oct 2021). Jan Macdonald, Mathieu Besançon, Sebastian Pokutta.
- Fine-Grained Neural Network Explanation by Identifying Input Features with Predictive Information (04 Oct 2021). Yang Zhang, Ashkan Khakzar, Yawei Li, Azade Farshad, Seong Tae Kim, Nassir Navab. Tags: FAtt, XAI.
- Discriminative Attribution from Counterfactuals (28 Sep 2021). N. Eckstein, A. S. Bates, G. Jefferis, Jan Funke. Tags: FAtt, CML.
- Awakening Latent Grounding from Pretrained Language Models for Semantic Parsing (22 Sep 2021). Qian Liu, Dejian Yang, Jiahui Zhang, Jiaqi Guo, Bin Zhou, Jian-Guang Lou.
- FUTURE-AI: Guiding Principles and Consensus Recommendations for Trustworthy Artificial Intelligence in Medical Imaging (20 Sep 2021). Karim Lekadir, Richard Osuala, C. Gallin, Noussair Lazrak, Kaisar Kushibar, ..., Nickolas Papanikolaou, Zohaib Salahuddin, Henry C. Woodruff, Philippe Lambin, L. Martí-Bonmatí. Tags: AI4TS.
- Detection Accuracy for Evaluating Compositional Explanations of Units (16 Sep 2021). Sayo M. Makinwa, Biagio La Rosa, Roberto Capobianco. Tags: FAtt, CoGe.
- Logic Traps in Evaluating Attribution Scores (12 Sep 2021). Yiming Ju, Yuanzhe Zhang, Zhao Yang, Zhongtao Jiang, Kang Liu, Jun Zhao. Tags: XAI, FAtt.
- Cross-Model Consensus of Explanations and Beyond for Image Classification Models: An Empirical Study (02 Sep 2021). Xuhong Li, Haoyi Xiong, Siyu Huang, Shilei Ji, Dejing Dou.
- Towards Improving Adversarial Training of NLP Models (01 Sep 2021). Jin Yong Yoo, Yanjun Qi. Tags: AAML.
- Towards Learning a Vocabulary of Visual Concepts and Operators using Deep Neural Networks (01 Sep 2021). Sunil Kumar Vengalil, N. Sinha.
- Spatio-Temporal Perturbations for Video Attribution (01 Sep 2021). Zhenqiang Li, Weimin Wang, Zuoyue Li, Yifei Huang, Yoichi Sato.
- Calibrating Class Activation Maps for Long-Tailed Visual Recognition (29 Aug 2021). Chi Zhang, Guosheng Lin, Lvlong Lai, Henghui Ding, Qingyao Wu.
- Explaining Bayesian Neural Networks (23 Aug 2021). Kirill Bykov, Marina M.-C. Höhne, Adelaida Creosteanu, Klaus-Robert Müller, Frederick Klauschen, Shinichi Nakajima, Marius Kloft. Tags: BDL, AAML.
- Challenges for cognitive decoding using deep learning methods (16 Aug 2021). A. Thomas, Christopher Ré, R. Poldrack. Tags: AI4CE.
- Where do Models go Wrong? Parameter-Space Saliency Maps for Explainability (03 Aug 2021). Roman Levin, Manli Shu, Eitan Borgnia, Furong Huang, Micah Goldblum, Tom Goldstein. Tags: FAtt, AAML.
- Surrogate Model-Based Explainability Methods for Point Cloud NNs (28 Jul 2021). Hanxiao Tan, Helena Kotthaus. Tags: 3DPC.
- Normalization Matters in Weakly Supervised Object Localization (28 Jul 2021). Jeesoo Kim, Junsuk Choe, Sangdoo Yun, Nojun Kwak. Tags: WSOL.
- Robust Explainability: A Tutorial on Gradient-Based Attribution Methods for Deep Neural Networks (23 Jul 2021). Ian E. Nielsen, Dimah Dera, Ghulam Rasool, N. Bouaynaya, R. Ramachandran. Tags: FAtt.
- Quality Metrics for Transparent Machine Learning With and Without Humans In the Loop Are Not Correlated (01 Jul 2021). F. Biessmann, D. Refiano.
- Towards Measuring Bias in Image Classification (01 Jul 2021). Nina Schaaf, Omar de Mitri, Hang Beom Kim, A. Windberger, Marco F. Huber. Tags: SSL.
- Crowdsourcing Evaluation of Saliency-based XAI Methods (27 Jun 2021). Xiaotian Lu, A. Tolmachev, Tatsuya Yamamoto, Koh Takeuchi, Seiji Okajima, T. Takebayashi, Koji Maruhashi, H. Kashima. Tags: XAI, FAtt.
- On the Robustness of Pretraining and Self-Supervision for a Deep Learning-based Analysis of Diabetic Retinopathy (25 Jun 2021). Vignesh Srinivasan, Nils Strodthoff, Jackie Ma, Alexander Binder, Klaus-Robert Müller, Wojciech Samek. Tags: OOD.
- Software for Dataset-wide XAI: From Local Explanations to Global Insights with Zennit, CoRelAy, and ViRelAy (24 Jun 2021). Christopher J. Anders, David Neumann, Wojciech Samek, K. Müller, Sebastian Lapuschkin.
- Synthetic Benchmarks for Scientific Research in Explainable Machine Learning (23 Jun 2021). Yang Liu, Sujay Khandagale, Colin White, W. Neiswanger.
- Reachability Analysis of Convolutional Neural Networks (22 Jun 2021). Xiaodong Yang, Tomoya Yamaguchi, Hoang-Dung Tran, Bardh Hoxha, Taylor T. Johnson, Danil Prokhorov. Tags: FAtt.
- NoiseGrad: Enhancing Explanations by Introducing Stochasticity to Model Weights (18 Jun 2021). Kirill Bykov, Anna Hedström, Shinichi Nakajima, Marina M.-C. Höhne. Tags: FAtt.
- Developing a Fidelity Evaluation Approach for Interpretable Machine Learning (16 Jun 2021). M. Velmurugan, Chun Ouyang, Catarina Moreira, Renuka Sindhgatta. Tags: XAI.
- Keep CALM and Improve Visual Feature Attribution (15 Jun 2021). Jae Myung Kim, Junsuk Choe, Zeynep Akata, Seong Joon Oh. Tags: FAtt.
- Exploiting auto-encoders and segmentation methods for middle-level explanations of image classification systems (09 Jun 2021). Andrea Apicella, Salvatore Giugliano, Francesco Isgrò, R. Prevete.
- Taxonomy of Machine Learning Safety: A Survey and Primer (09 Jun 2021). Sina Mohseni, Haotao Wang, Zhiding Yu, Chaowei Xiao, Zhangyang Wang, J. Yadawa.
- On the Lack of Robust Interpretability of Neural Text Classifiers (08 Jun 2021). Muhammad Bilal Zafar, Michele Donini, Dylan Slack, Cédric Archambeau, Sanjiv Ranjan Das, K. Kenthapadi. Tags: AAML.
- Topological Measurement of Deep Neural Networks Using Persistent Homology (06 Jun 2021). Satoru Watanabe, Hayato Yamana.
- Evaluating Local Explanations using White-box Models (04 Jun 2021). Amir Hossein Akhavan Rahnama, Judith Butepage, Pierre Geurts, Henrik Bostrom. Tags: FAtt.
- To trust or not to trust an explanation: using LEAF to evaluate local linear XAI methods (01 Jun 2021). E. Amparore, Alan Perotti, P. Bajardi. Tags: FAtt.
- Balancing Robustness and Sensitivity using Feature Contrastive Learning (19 May 2021). Seungyeon Kim, Daniel Glasner, Srikumar Ramalingam, Cho-Jui Hsieh, Kishore Papineni, Sanjiv Kumar.
- Do Feature Attribution Methods Correctly Attribute Features? (27 Apr 2021). Yilun Zhou, Serena Booth, Marco Tulio Ribeiro, J. Shah. Tags: FAtt, XAI.
- Improving Attribution Methods by Learning Submodular Functions (19 Apr 2021). Piyushi Manupriya, Tarun Ram Menta, S. Jagarlapudi, V. Balasubramanian. Tags: TDI.
- Flexible Instance-Specific Rationalization of NLP Models (16 Apr 2021). G. Chrysostomou, Nikolaos Aletras.
- Mutual Information Preserving Back-propagation: Learn to Invert for Faithful Attribution (14 Apr 2021). Huiqi Deng, Na Zou, Weifu Chen, Guo-Can Feng, Mengnan Du, Xia Hu. Tags: FAtt.
- Enhancing Deep Neural Network Saliency Visualizations with Gradual Extrapolation (11 Apr 2021). Tomasz Szandała. Tags: FAtt.
- Explaining Neural Network Predictions on Sentence Pairs via Learning Word-Group Masks (09 Apr 2021). Hanjie Chen, Song Feng, Jatin Ganhotra, H. Wan, Chulaka Gunasekara, Sachindra Joshi, Yangfeng Ji.
- Robust Semantic Interpretability: Revisiting Concept Activation Vectors (06 Apr 2021). J. Pfau, A. Young, Jerome Wei, Maria L. Wei, Michael J. Keiser. Tags: FAtt.
- White Box Methods for Explanations of Convolutional Neural Networks in Image Classification Tasks (06 Apr 2021). Meghna P. Ayyar, J. Benois-Pineau, A. Zemmari. Tags: FAtt.
- Evaluating explainable artificial intelligence methods for multi-label deep learning classification tasks in remote sensing (03 Apr 2021). Ioannis Kakogeorgiou, Konstantinos Karantzalos. Tags: XAI.
- Explaining COVID-19 and Thoracic Pathology Model Predictions by Identifying Informative Input Features (01 Apr 2021). Ashkan Khakzar, Yang Zhang, W. Mansour, Yuezhi Cai, Yawei Li, Yucheng Zhang, Seong Tae Kim, Nassir Navab. Tags: FAtt.
- Neural Response Interpretation through the Lens of Critical Pathways (31 Mar 2021). Ashkan Khakzar, Soroosh Baselizadeh, Saurabh Khanduja, Christian Rupprecht, Seong Tae Kim, Nassir Navab.
- Group-CAM: Group Score-Weighted Visual Explanations for Deep Convolutional Networks (25 Mar 2021). Qing-Long Zhang, Lu Rao, Yubin Yang.
- ECINN: Efficient Counterfactuals from Invertible Neural Networks (25 Mar 2021). Frederik Hvilshoj, Alexandros Iosifidis, Ira Assent. Tags: BDL.
- Robust Models Are More Interpretable Because Attributions Look Normal (20 Mar 2021). Zifan Wang, Matt Fredrikson, Anupam Datta. Tags: OOD, FAtt.
- Interpretable Deep Learning: Interpretation, Interpretability, Trustworthiness, and Beyond (19 Mar 2021). Xuhong Li, Haoyi Xiong, Xingjian Li, Xuanyu Wu, Xiao Zhang, Ji Liu, Jiang Bian, Dejing Dou. Tags: AAML, FaML, XAI, HAI.