The (Un)reliability of saliency methods
Pieter-Jan Kindermans, Sara Hooker, Julius Adebayo, Maximilian Alber, Kristof T. Schütt, Sven Dähne, D. Erhan, Been Kim
2 November 2017. arXiv:1711.00867. [FAtt] [XAI]

Papers citing "The (Un)reliability of saliency methods" (50 of 119 papers shown)

A model-agnostic approach for generating Saliency Maps to explain inferred decisions of Deep Learning Models
S. Karatsiolis, A. Kamilaris. 19 Sep 2022. [FAtt]

Explainable AI for clinical and remote health applications: a survey on tabular and time series data
Flavio Di Martino, Franca Delmastro. 14 Sep 2022. [AI4TS]

Concept-Based Techniques for "Musicologist-friendly" Explanations in a Deep Music Classifier
Francesco Foscarin, Katharina Hoedt, Verena Praher, A. Flexer, Gerhard Widmer. 26 Aug 2022.

HetVis: A Visual Analysis Approach for Identifying Data Heterogeneity in Horizontal Federated Learning
Xumeng Wang, Wei-Neng Chen, Jiazhi Xia, Zhen Wen, Rongchen Zhu, Tobias Schreck. 16 Aug 2022. [FedML]

Explainable AI Algorithms for Vibration Data-based Fault Detection: Use Case-adapted Methods and Critical Evaluation
Oliver Mey, Deniz Neufeld. 21 Jul 2022.

Towards ML Methods for Biodiversity: A Novel Wild Bee Dataset and Evaluations of XAI Methods for ML-Assisted Rare Species Annotations
Teodor Chiaburu, F. Biessmann, Frank Haußer. 15 Jun 2022.

Multi-Objective Hyperparameter Optimization in Machine Learning -- An Overview
Florian Karl, Tobias Pielok, Julia Moosbauer, Florian Pfisterer, Stefan Coors, ..., Jakob Richter, Michel Lang, Eduardo C. Garrido-Merchán, Juergen Branke, B. Bischl. 15 Jun 2022. [AI4CE]

Attribution-based Explanations that Provide Recourse Cannot be Robust
H. Fokkema, R. D. Heide, T. Erven. 31 May 2022. [FAtt]

Backdooring Explainable Machine Learning
Maximilian Noppel, Lukas Peter, Christian Wressnegger. 20 Apr 2022. [AAML]

Maximum Entropy Baseline for Integrated Gradients
Hanxiao Tan. 12 Apr 2022. [FAtt]

XAI in the context of Predictive Process Monitoring: Too much to Reveal
Ghada Elkhawaga, Mervat Abuelkheir, M. Reichert. 16 Feb 2022.

Don't Lie to Me! Robust and Efficient Explainability with Verified Perturbation Analysis
Thomas Fel, Mélanie Ducoffe, David Vigouroux, Rémi Cadène, Mikael Capelle, C. Nicodeme, Thomas Serre. 15 Feb 2022. [AAML]

Multi-Modal Knowledge Graph Construction and Application: A Survey
Xiangru Zhu, Zhixu Li, Xiaodan Wang, Xueyao Jiang, Penglei Sun, Xuwu Wang, Yanghua Xiao, N. Yuan. 11 Feb 2022.

Investigating the fidelity of explainable artificial intelligence methods for applications of convolutional neural networks in geoscience
Antonios Mamalakis, E. Barnes, I. Ebert-Uphoff. 07 Feb 2022.

Visualizing Automatic Speech Recognition -- Means for a Better Understanding?
Karla Markert, Romain Parracone, Mykhailo Kulakov, Philip Sperl, Ching-yu Kao, Konstantin Böttinger. 01 Feb 2022.

PCACE: A Statistical Approach to Ranking Neurons for CNN Interpretability
Sílvia Casacuberta, Esra Suel, Seth Flaxman. 31 Dec 2021. [FAtt]

Generating Fluent Fact Checking Explanations with Unsupervised Post-Editing
Shailza Jolly, Pepa Atanasova, Isabelle Augenstein. 13 Dec 2021.

Improving Deep Learning Interpretability by Saliency Guided Training
Aya Abdelsalam Ismail, H. C. Bravo, S. Feizi. 29 Nov 2021. [FAtt]

Evaluation of Interpretability for Deep Learning algorithms in EEG Emotion Recognition: A case study in Autism
J. M. M. Torres, Sara E. Medina-DeVilliers, T. Clarkson, M. Lerner, Giuseppe Riccardi. 25 Nov 2021.

Self-Interpretable Model with Transformation Equivariant Interpretation
Yipei Wang, Xiaoqian Wang. 09 Nov 2021.

Defense Against Explanation Manipulation
Ruixiang Tang, Ninghao Liu, Fan Yang, Na Zou, Xia Hu. 08 Nov 2021. [AAML]

A Survey on the Robustness of Feature Importance and Counterfactual Explanations
Saumitra Mishra, Sanghamitra Dutta, Jason Long, Daniele Magazzeni. 30 Oct 2021. [AAML]

Evaluating the Faithfulness of Importance Measures in NLP by Recursively Masking Allegedly Important Tokens and Retraining
Andreas Madsen, Nicholas Meade, Vaibhav Adlakha, Siva Reddy. 15 Oct 2021.

Self-explaining Neural Network with Concept-based Explanations for ICU Mortality Prediction
Sayantan Kumar, Sean C. Yu, Thomas Kannampallil, Zachary B. Abrams, Andrew Michelson, Philip R. O. Payne. 09 Oct 2021. [FAtt]

Consistent Explanations by Contrastive Learning
Vipin Pillai, Soroush Abbasi Koohpayegani, Ashley Ouligian, Dennis Fong, Hamed Pirsiavash. 01 Oct 2021. [FAtt]

GCExplainer: Human-in-the-Loop Concept-based Explanations for Graph Neural Networks
Lucie Charlotte Magister, Dmitry Kazhdan, Vikash Singh, Pietro Lió. 25 Jul 2021.

CAMERAS: Enhanced Resolution And Sanity preserving Class Activation Mapping for image saliency
M. Jalwana, Naveed Akhtar, Bennamoun, Ajmal Saeed Mian. 20 Jun 2021.

Zorro: Valid, Sparse, and Stable Explanations in Graph Neural Networks
Thorben Funke, Megha Khosla, Mandeep Rathee, Avishek Anand. 18 May 2021. [FAtt]

Towards Rigorous Interpretations: a Formalisation of Feature Attribution
Darius Afchar, Romain Hennequin, Vincent Guigue. 26 Apr 2021. [FAtt]

EXplainable Neural-Symbolic Learning (X-NeSyL) methodology to fuse deep learning representations with expert knowledge graphs: the MonuMAI cultural heritage use case
Natalia Díaz Rodríguez, Alberto Lamas, Jules Sanchez, Gianni Franchi, Ivan Donadello, S. Tabik, David Filliat, P. Cruz, Rosana Montes, Francisco Herrera. 24 Apr 2021.

Neural Network Attribution Methods for Problems in Geoscience: A Novel Synthetic Benchmark Dataset
Antonios Mamalakis, I. Ebert-Uphoff, E. Barnes. 18 Mar 2021. [OOD]

Do Input Gradients Highlight Discriminative Features?
Harshay Shah, Prateek Jain, Praneeth Netrapalli. 25 Feb 2021. [AAML] [FAtt]

Connecting Interpretability and Robustness in Decision Trees through Separation
Michal Moshkovitz, Yao-Yuan Yang, Kamalika Chaudhuri. 14 Feb 2021.

Neural Prototype Trees for Interpretable Fine-grained Image Recognition
Meike Nauta, Ron van Bree, C. Seifert. 03 Dec 2020.

Reflective-Net: Learning from Explanations
Johannes Schneider, Michalis Vlachos. 27 Nov 2020. [FAtt] [OffRL] [LRM]

Exemplary Natural Images Explain CNN Activations Better than State-of-the-Art Feature Visualization
Judy Borowski, Roland S. Zimmermann, Judith Schepers, Robert Geirhos, Thomas S. A. Wallis, Matthias Bethge, Wieland Brendel. 23 Oct 2020. [FAtt]

Optimism in the Face of Adversity: Understanding and Improving Deep Learning through Adversarial Robustness
Guillermo Ortiz-Jiménez, Apostolos Modas, Seyed-Mohsen Moosavi-Dezfooli, P. Frossard. 19 Oct 2020. [AAML]

What Do You See? Evaluation of Explainable Artificial Intelligence (XAI) Interpretability through Neural Backdoors
Yi-Shan Lin, Wen-Chuan Lee, Z. Berkay Celik. 22 Sep 2020. [XAI]

A simple defense against adversarial attacks on heatmap explanations
Laura Rieger, Lars Kai Hansen. 13 Jul 2020. [FAtt] [AAML]

Evolved Explainable Classifications for Lymph Node Metastases
Iam Palatnik de Sousa, M. Vellasco, E. C. Silva. 14 May 2020.

Evaluating and Aggregating Feature-based Model Explanations
Umang Bhatt, Adrian Weller, J. M. F. Moura. 01 May 2020. [XAI]

Explainable Deep Learning: A Field Guide for the Uninitiated
Gabrielle Ras, Ning Xie, Marcel van Gerven, Derek Doran. 30 Apr 2020. [AAML] [XAI]

A Survey of Deep Learning for Scientific Discovery
M. Raghu, Eric Schmidt. 26 Mar 2020. [OOD] [AI4CE]

Measuring and improving the quality of visual explanations
Agnieszka Grabska-Barwińska. 14 Mar 2020. [XAI] [FAtt]

Explaining Explanations: Axiomatic Feature Interactions for Deep Networks
Joseph D. Janizek, Pascal Sturmfels, Su-In Lee. 10 Feb 2020. [FAtt]

SAUNet: Shape Attentive U-Net for Interpretable Medical Image Segmentation
Jesse Sun, Fatemeh Darbehani, M. Zaidi, Bo Wang. 21 Jan 2020. [AAML]

CXPlain: Causal Explanations for Model Interpretation under Uncertainty
Patrick Schwab, W. Karlen. 27 Oct 2019. [FAtt] [CML]

Seeing What a GAN Cannot Generate
David Bau, Jun-Yan Zhu, Jonas Wulff, William S. Peebles, Hendrik Strobelt, Bolei Zhou, Antonio Torralba. 24 Oct 2019. [GAN]

Deep Weakly-Supervised Learning Methods for Classification and Localization in Histology Images: A Survey
Jérôme Rony, Soufiane Belharbi, Jose Dolz, Ismail Ben Ayed, Luke McCaffrey, Eric Granger. 08 Sep 2019.

Saccader: Improving Accuracy of Hard Attention Models for Vision
Gamaleldin F. Elsayed, Simon Kornblith, Quoc V. Le. 20 Aug 2019. [VLM]