Sanity Checks for Saliency Maps

8 October 2018 · arXiv:1810.03292
Julius Adebayo, Justin Gilmer, M. Muelly, Ian Goodfellow, Moritz Hardt, Been Kim
FAtt, AAML, XAI

Papers citing "Sanity Checks for Saliency Maps"

50 / 302 papers shown
Learning Propagation Rules for Attribution Map Generation · Yiding Yang, Jiayan Qiu, Mingli Song, Dacheng Tao, Xinchao Wang · FAtt · 14 Oct 2020
Visualizing Color-wise Saliency of Black-Box Image Classification Models · Yuhki Hatakeyama, Hiroki Sakuma, Yoshinori Konishi, Kohei Suenaga · FAtt · 06 Oct 2020
Remembering for the Right Reasons: Explanations Reduce Catastrophic Forgetting · Sayna Ebrahimi, Suzanne Petryk, Akash Gokul, William Gan, Joseph E. Gonzalez, Marcus Rohrbach, Trevor Darrell · CLL · 04 Oct 2020
Explaining Deep Neural Networks · Oana-Maria Camburu · XAI, FAtt · 04 Oct 2020
Trustworthy Convolutional Neural Networks: A Gradient Penalized-based Approach · Nicholas F Halliwell, Freddy Lecue · FAtt · 29 Sep 2020
Quantitative and Qualitative Evaluation of Explainable Deep Learning Methods for Ophthalmic Diagnosis · Amitojdeep Singh, J. Balaji, M. Rasheed, Varadharajan Jayakumar, R. Raman, Vasudevan Lakshminarayanan · BDL, XAI, FAtt · 26 Sep 2020
What Do You See? Evaluation of Explainable Artificial Intelligence (XAI) Interpretability through Neural Backdoors · Yi-Shan Lin, Wen-Chuan Lee, Z. Berkay Celik · XAI · 22 Sep 2020
Captum: A unified and generic model interpretability library for PyTorch · Narine Kokhlikyan, Vivek Miglani, Miguel Martin, Edward Wang, B. Alsallakh, ..., Alexander Melnikov, Natalia Kliushkina, Carlos Araya, Siqi Yan, Orion Reblitz-Richardson · FAtt · 16 Sep 2020
Model extraction from counterfactual explanations · Ulrich Aivodji, Alexandre Bolot, Sébastien Gambs · MIACV, MLAU · 03 Sep 2020
Deep Learning in Protein Structural Modeling and Design · Wenhao Gao, S. Mahajan, Jeremias Sulam, Jeffrey J. Gray · 16 Jul 2020
A simple defense against adversarial attacks on heatmap explanations · Laura Rieger, Lars Kai Hansen · FAtt, AAML · 13 Jul 2020
Drug discovery with explainable artificial intelligence · José Jiménez-Luna, F. Grisoni, G. Schneider · 01 Jul 2020
Adversarial Infidelity Learning for Model Interpretation · Jian Liang, Bing Bai, Yuren Cao, Kun Bai, Fei-Yue Wang · AAML · 09 Jun 2020
Higher-Order Explanations of Graph Neural Networks via Relevant Walks · Thomas Schnake, Oliver Eberle, Jonas Lederer, Shinichi Nakajima, Kristof T. Schütt, Klaus-Robert Muller, G. Montavon · 05 Jun 2020
Local and Global Explanations of Agent Behavior: Integrating Strategy Summaries with Saliency Maps · Tobias Huber, Katharina Weitz, Elisabeth André, Ofra Amir · FAtt · 18 May 2020
Evaluating and Aggregating Feature-based Model Explanations · Umang Bhatt, Adrian Weller, J. M. F. Moura · XAI · 01 May 2020
Explainable Deep Learning: A Field Guide for the Uninitiated · Gabrielle Ras, Ning Xie, Marcel van Gerven, Derek Doran · AAML, XAI · 30 Apr 2020
Generating Fact Checking Explanations · Pepa Atanasova, J. Simonsen, Christina Lioma, Isabelle Augenstein · 13 Apr 2020
Ground Truth Evaluation of Neural Network Explanations with CLEVR-XAI · L. Arras, Ahmed Osman, Wojciech Samek · XAI, AAML · 16 Mar 2020
Measuring and improving the quality of visual explanations · Agnieszka Grabska-Barwińska · XAI, FAtt · 14 Mar 2020
IROF: a low resource evaluation metric for explanation methods · Laura Rieger, Lars Kai Hansen · 09 Mar 2020
Causal Interpretability for Machine Learning -- Problems, Methods and Evaluation · Raha Moraffah, Mansooreh Karami, Ruocheng Guo, A. Raglin, Huan Liu · CML, ELM, XAI · 09 Mar 2020
Gradient-Adjusted Neuron Activation Profiles for Comprehensive Introspection of Convolutional Speech Recognition Models · A. Krug, Sebastian Stober · 19 Feb 2020
Explaining Explanations: Axiomatic Feature Interactions for Deep Networks · Joseph D. Janizek, Pascal Sturmfels, Su-In Lee · FAtt · 10 Feb 2020
Concept Whitening for Interpretable Image Recognition · Zhi Chen, Yijie Bei, Cynthia Rudin · FAtt · 05 Feb 2020
GraphLIME: Local Interpretable Model Explanations for Graph Neural Networks · Q. Huang, M. Yamada, Yuan Tian, Dinesh Singh, Dawei Yin, Yi-Ju Chang · FAtt · 17 Jan 2020
Making deep neural networks right for the right scientific reasons by interacting with their explanations · P. Schramowski, Wolfgang Stammer, Stefano Teso, Anna Brugger, Xiaoting Shao, Hans-Georg Luigs, Anne-Katrin Mahlein, Kristian Kersting · 15 Jan 2020
On Interpretability of Artificial Neural Networks: A Survey · Fenglei Fan, Jinjun Xiong, Mengzhou Li, Ge Wang · AAML, AI4CE · 08 Jan 2020
When Explanations Lie: Why Many Modified BP Attributions Fail · Leon Sixt, Maximilian Granz, Tim Landgraf · BDL, FAtt, XAI · 20 Dec 2019
On the Explanation of Machine Learning Predictions in Clinical Gait Analysis · D. Slijepcevic, Fabian Horst, Sebastian Lapuschkin, Anna-Maria Raberger, Matthias Zeppelzauer, Wojciech Samek, C. Breiteneder, W. Schöllhorn, B. Horsak · 16 Dec 2019
CXPlain: Causal Explanations for Model Interpretation under Uncertainty · Patrick Schwab, W. Karlen · FAtt, CML · 27 Oct 2019
Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI · Alejandro Barredo Arrieta, Natalia Díaz Rodríguez, Javier Del Ser, Adrien Bennetot, S. Tabik, ..., S. Gil-Lopez, Daniel Molina, Richard Benjamins, Raja Chatila, Francisco Herrera · XAI · 22 Oct 2019
Understanding Deep Networks via Extremal Perturbations and Smooth Masks · Ruth C. Fong, Mandela Patrick, Andrea Vedaldi · AAML · 18 Oct 2019
Can I Trust the Explainer? Verifying Post-hoc Explanatory Methods · Oana-Maria Camburu, Eleonora Giunchiglia, Jakob N. Foerster, Thomas Lukasiewicz, Phil Blunsom · FAtt, AAML · 04 Oct 2019
Visual Explanation for Deep Metric Learning · Sijie Zhu, Taojiannan Yang, C. L. P. Chen · FAtt · 27 Sep 2019
Deep Weakly-Supervised Learning Methods for Classification and Localization in Histology Images: A Survey · Jérôme Rony, Soufiane Belharbi, Jose Dolz, Ismail Ben Ayed, Luke McCaffrey, Eric Granger · 08 Sep 2019
Saccader: Improving Accuracy of Hard Attention Models for Vision · Gamaleldin F. Elsayed, Simon Kornblith, Quoc V. Le · VLM · 20 Aug 2019
Visual Interaction with Deep Learning Models through Collaborative Semantic Inference · Sebastian Gehrmann, Hendrik Strobelt, Robert Krüger, Hanspeter Pfister, Alexander M. Rush · HAI · 24 Jul 2019
AlphaStock: A Buying-Winners-and-Selling-Losers Investment Strategy using Interpretable Deep Reinforcement Attention Networks · Jingyuan Wang, Yang Zhang, Ke Tang, Junjie Wu, Zhang Xiong · AIFin · 24 Jul 2019
Graph Neural Network for Interpreting Task-fMRI Biomarkers · Xiaoxiao Li, Nicha Dvornek, Yuan Zhou, Juntang Zhuang, P. Ventola, James S. Duncan · 02 Jul 2019
Incorporating Priors with Feature Attribution on Text Classification · Frederick Liu, Besim Avci · FAtt, FaML · 19 Jun 2019
The Secrets of Machine Learning: Ten Things You Wish You Had Known Earlier to be More Effective at Data Analysis · Cynthia Rudin, David Carlson · HAI · 04 Jun 2019
Adversarial Robustness as a Prior for Learned Representations · Logan Engstrom, Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Brandon Tran, A. Madry · OOD, AAML · 03 Jun 2019
Certifiably Robust Interpretation in Deep Learning · Alexander Levine, Sahil Singla, S. Feizi · FAtt, AAML · 28 May 2019
Interpreting Adversarially Trained Convolutional Neural Networks · Tianyuan Zhang, Zhanxing Zhu · AAML, GAN, FAtt · 23 May 2019
What Do Adversarially Robust Models Look At? · Takahiro Itazuri, Yoshihiro Fukuhara, Hirokatsu Kataoka, Shigeo Morishima · 19 May 2019
Towards Automatic Concept-based Explanations · Amirata Ghorbani, James Wexler, James Zou, Been Kim · FAtt, LRM · 07 Feb 2019
Interpretable machine learning: definitions, methods, and applications · W. James Murdoch, Chandan Singh, Karl Kumbier, R. Abbasi-Asl, Bin-Xia Yu · XAI, HAI · 14 Jan 2019
Context-encoding Variational Autoencoder for Unsupervised Anomaly Detection · David Zimmerer, Simon A. A. Kohl, Jens Petersen, Fabian Isensee, Klaus H. Maier-Hein · DRL · 14 Dec 2018
Interpretable Deep Learning under Fire · Xinyang Zhang, Ningfei Wang, Hua Shen, S. Ji, Xiapu Luo, Ting Wang · AAML, AI4CE · 03 Dec 2018