The (Un)reliability of saliency methods
arXiv: 1711.00867
2 November 2017
Authors: Pieter-Jan Kindermans, Sara Hooker, Julius Adebayo, Maximilian Alber, Kristof T. Schütt, Sven Dähne, Dumitru Erhan, Been Kim
Tags: FAtt, XAI
Papers citing "The (Un)reliability of saliency methods" (19 of 119 shown):

| Title | Authors | Tags | Citations | Date |
|---|---|---|---|---|
| Visual Interaction with Deep Learning Models through Collaborative Semantic Inference | Sebastian Gehrmann, Hendrik Strobelt, Robert Krüger, Hanspeter Pfister, Alexander M. Rush | HAI | 57 | 24 Jul 2019 |
| Incorporating Priors with Feature Attribution on Text Classification | Frederick Liu, Besim Avci | FAtt, FaML | 120 | 19 Jun 2019 |
| Adversarial Robustness as a Prior for Learned Representations | Logan Engstrom, Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Brandon Tran, Aleksander Madry | OOD, AAML | 63 | 03 Jun 2019 |
| Certifiably Robust Interpretation in Deep Learning | Alexander Levine, Sahil Singla, Soheil Feizi | FAtt, AAML | 63 | 28 May 2019 |
| What Do Adversarially Robust Models Look At? | Takahiro Itazuri, Yoshihiro Fukuhara, Hirokatsu Kataoka, Shigeo Morishima | | 5 | 19 May 2019 |
| Detecting inter-sectional accuracy differences in driver drowsiness detection algorithms | Mkhuseli Ngxande, J. Tapamo, Michael G. Burke | | 12 | 23 Apr 2019 |
| Software and application patterns for explanation methods | Maximilian Alber | | 11 | 09 Apr 2019 |
| Regression Concept Vectors for Bidirectional Explanations in Histopathology | Mara Graziani, Vincent Andrearczyk, Henning Müller | | 78 | 09 Apr 2019 |
| Explaining Anomalies Detected by Autoencoders Using SHAP | Liat Antwarg, Ronnie Mindlin Miller, Bracha Shapira, Lior Rokach | FAtt, TDI | 86 | 06 Mar 2019 |
| Interpretable Deep Learning under Fire | Xinyang Zhang, Ningfei Wang, Hua Shen, Shouling Ji, Xiapu Luo, Ting Wang | AAML, AI4CE | 168 | 03 Dec 2018 |
| An Overview of Computational Approaches for Interpretation Analysis | Philipp Blandfort, Jörn Hees, D. Patton | | 2 | 09 Nov 2018 |
| What made you do this? Understanding black-box decisions with sufficient input subsets | Brandon Carter, Jonas W. Mueller, Siddhartha Jain, David K. Gifford | FAtt | 77 | 09 Oct 2018 |
| Sanity Checks for Saliency Maps | Julius Adebayo, Justin Gilmer, Michael Muelly, Ian Goodfellow, Moritz Hardt, Been Kim | FAtt, AAML, XAI | 1,926 | 08 Oct 2018 |
| xGEMs: Generating Examplars to Explain Black-Box Models | Shalmali Joshi, Oluwasanmi Koyejo, Been Kim, Joydeep Ghosh | MLAU | 40 | 22 Jun 2018 |
| On the Robustness of Interpretability Methods | David Alvarez-Melis, Tommi Jaakkola | | 521 | 21 Jun 2018 |
| Towards Robust Interpretability with Self-Explaining Neural Networks | David Alvarez-Melis, Tommi Jaakkola | MILM, XAI | 932 | 20 Jun 2018 |
| Explanation Methods in Deep Learning: Users, Values, Concerns and Challenges | Gabrielle Ras, Marcel van Gerven, W. Haselager | XAI | 217 | 20 Mar 2018 |
| Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV) | Been Kim, Martin Wattenberg, Justin Gilmer, Carrie J. Cai, James Wexler, Fernanda Viégas, Rory Sayres | FAtt | 1,789 | 30 Nov 2017 |
| Deep Learning Techniques for Music Generation -- A Survey | Jean-Pierre Briot, Gaëtan Hadjeres, François Pachet | MGen | 297 | 05 Sep 2017 |