The (Un)reliability of saliency methods

2 November 2017
Pieter-Jan Kindermans, Sara Hooker, Julius Adebayo, Maximilian Alber, Kristof T. Schütt, Sven Dähne, D. Erhan, Been Kim
FAtt, XAI

Papers citing "The (Un)reliability of saliency methods"

Showing 19 of 119 citing papers.

Visual Interaction with Deep Learning Models through Collaborative Semantic Inference
Sebastian Gehrmann, Hendrik Strobelt, Robert Krüger, Hanspeter Pfister, Alexander M. Rush
24 Jul 2019 · HAI

Incorporating Priors with Feature Attribution on Text Classification
Frederick Liu, Besim Avci
19 Jun 2019 · FAtt, FaML

Adversarial Robustness as a Prior for Learned Representations
Logan Engstrom, Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Brandon Tran, A. Madry
03 Jun 2019 · OOD, AAML

Certifiably Robust Interpretation in Deep Learning
Alexander Levine, Sahil Singla, S. Feizi
28 May 2019 · FAtt, AAML

What Do Adversarially Robust Models Look At?
Takahiro Itazuri, Yoshihiro Fukuhara, Hirokatsu Kataoka, Shigeo Morishima
19 May 2019

Detecting inter-sectional accuracy differences in driver drowsiness detection algorithms
Mkhuseli Ngxande, J. Tapamo, Michael G. Burke
23 Apr 2019

Software and application patterns for explanation methods
Maximilian Alber
09 Apr 2019

Regression Concept Vectors for Bidirectional Explanations in Histopathology
Mara Graziani, Vincent Andrearczyk, Henning Müller
09 Apr 2019

Explaining Anomalies Detected by Autoencoders Using SHAP
Liat Antwarg, Ronnie Mindlin Miller, Bracha Shapira, L. Rokach
06 Mar 2019 · FAtt, TDI

Interpretable Deep Learning under Fire
Xinyang Zhang, Ningfei Wang, Hua Shen, S. Ji, Xiapu Luo, Ting Wang
03 Dec 2018 · AAML, AI4CE

An Overview of Computational Approaches for Interpretation Analysis
Philipp Blandfort, Jörn Hees, D. Patton
09 Nov 2018

What made you do this? Understanding black-box decisions with sufficient input subsets
Brandon Carter, Jonas W. Mueller, Siddhartha Jain, David K Gifford
09 Oct 2018 · FAtt

Sanity Checks for Saliency Maps
Julius Adebayo, Justin Gilmer, M. Muelly, Ian Goodfellow, Moritz Hardt, Been Kim
08 Oct 2018 · FAtt, AAML, XAI

xGEMs: Generating Examplars to Explain Black-Box Models
Shalmali Joshi, Oluwasanmi Koyejo, Been Kim, Joydeep Ghosh
22 Jun 2018 · MLAU

On the Robustness of Interpretability Methods
David Alvarez-Melis, Tommi Jaakkola
21 Jun 2018

Towards Robust Interpretability with Self-Explaining Neural Networks
David Alvarez-Melis, Tommi Jaakkola
20 Jun 2018 · MILM, XAI

Explanation Methods in Deep Learning: Users, Values, Concerns and Challenges
Gabrielle Ras, Marcel van Gerven, W. Haselager
20 Mar 2018 · XAI

Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)
Been Kim, Martin Wattenberg, Justin Gilmer, Carrie J. Cai, James Wexler, F. Viégas, Rory Sayres
30 Nov 2017 · FAtt

Deep Learning Techniques for Music Generation -- A Survey
Jean-Pierre Briot, Gaëtan Hadjeres, F. Pachet
05 Sep 2017 · MGen