Sanity Checks for Saliency Maps
arXiv:1810.03292, 8 October 2018
Julius Adebayo, Justin Gilmer, M. Muelly, Ian Goodfellow, Moritz Hardt, Been Kim
Topics: FAtt, AAML, XAI

Papers citing "Sanity Checks for Saliency Maps"

Showing 50 of 302 citing papers.
Why Did This Model Forecast This Future? Closed-Form Temporal Saliency Towards Causal Explanations of Probabilistic Forecasts
Chirag Raman, Hayley Hung, Marco Loog
01 Jun 2022

Attribution-based Explanations that Provide Recourse Cannot be Robust
H. Fokkema, R. D. Heide, T. Erven
Topics: FAtt
31 May 2022

Faithful Explanations for Deep Graph Models
Zifan Wang, Yuhang Yao, Chaoran Zhang, Han Zhang, Youjie Kang, Carlee Joe-Wong, Matt Fredrikson, Anupam Datta
Topics: FAtt
24 May 2022

What You See is What You Classify: Black Box Attributions
Steven Stalder, Nathanael Perraudin, R. Achanta, F. Pérez-Cruz, Michele Volpi
Topics: FAtt
23 May 2022

B-cos Networks: Alignment is All We Need for Interpretability
Moritz D Boehle, Mario Fritz, Bernt Schiele
20 May 2022

Cardinality-Minimal Explanations for Monotonic Neural Networks
Ouns El Harzli, Bernardo Cuenca Grau, Ian Horrocks
Topics: FAtt
19 May 2022

Trustworthy Graph Neural Networks: Aspects, Methods and Trends
He Zhang, Bang Wu, Xingliang Yuan, Shirui Pan, Hanghang Tong, Jian Pei
16 May 2022

How Does Frequency Bias Affect the Robustness of Neural Image Classifiers against Common Corruption and Adversarial Perturbations?
Alvin Chan, Yew-Soon Ong, Clement Tan
Topics: AAML
09 May 2022

ExSum: From Local Explanations to Model Understanding
Yilun Zhou, Marco Tulio Ribeiro, J. Shah
Topics: FAtt, LRM
30 Apr 2022

Backdooring Explainable Machine Learning
Maximilian Noppel, Lukas Peter, Christian Wressnegger
Topics: AAML
20 Apr 2022

Interpretability of Machine Learning Methods Applied to Neuroimaging
Elina Thibeau-Sutre, S. Collin, Ninon Burgos, O. Colliot
14 Apr 2022

Visualizing Deep Neural Networks with Topographic Activation Maps
A. Krug, Raihan Kabir Ratul, Christopher Olson, Sebastian Stober
Topics: FAtt, AI4CE
07 Apr 2022

Predicting and Explaining Mobile UI Tappability with Vision Modeling and Saliency Analysis
E. Schoop, Xin Zhou, Gang Li, Zhourong Chen, Björn Hartmann, Yang Li
Topics: HAI, FAtt
05 Apr 2022

Towards Interpretable Deep Reinforcement Learning Models via Inverse Reinforcement Learning
Yuansheng Xie, Soroush Vosoughi, Saeed Hassanpour
30 Mar 2022

Visualizing Global Explanations of Point Cloud DNNs
Hanxiao Tan
Topics: 3DPC
17 Mar 2022

Evaluating Feature Attribution Methods in the Image Domain
Arne Gevaert, Axel-Jan Rousseau, Thijs Becker, D. Valkenborg, T. D. Bie, Yvan Saeys
Topics: FAtt
22 Feb 2022

Don't Lie to Me! Robust and Efficient Explainability with Verified Perturbation Analysis
Thomas Fel, Mélanie Ducoffe, David Vigouroux, Rémi Cadène, Mikael Capelle, C. Nicodeme, Thomas Serre
Topics: AAML
15 Feb 2022

DermX: an end-to-end framework for explainable automated dermatological diagnosis
Raluca Jalaboi, F. Faye, Mauricio Orbes-Arteaga, D. Jørgensen, Ole Winther, A. Galimzianova
Topics: MedIm
14 Feb 2022

Time to Focus: A Comprehensive Benchmark Using Time Series Attribution Methods
Dominique Mercier, Jwalin Bhatt, Andreas Dengel, Sheraz Ahmed
Topics: AI4TS
08 Feb 2022

Towards a consistent interpretation of AIOps models
Yingzhe Lyu, Gopi Krishnan Rajbahadur, Dayi Lin, Boyuan Chen, Zhen Ming (Jack) Jiang
Topics: AI4CE
04 Feb 2022

The Disagreement Problem in Explainable Machine Learning: A Practitioner's Perspective
Satyapriya Krishna, Tessa Han, Alex Gu, Steven Wu, S. Jabbari, Himabindu Lakkaraju
03 Feb 2022

Debiased-CAM to mitigate systematic error with faithful visual explanations of machine learning
Wencan Zhang, Mariella Dimiccoli, Brian Y. Lim
Topics: FAtt
30 Jan 2022

Post-Hoc Explanations Fail to Achieve their Purpose in Adversarial Contexts
Sebastian Bordt, Michèle Finck, Eric Raidl, U. V. Luxburg
Topics: AILaw
25 Jan 2022

Deeply Explain CNN via Hierarchical Decomposition
Ming-Ming Cheng, Peng-Tao Jiang, Linghao Han, Liang Wang, Philip H. S. Torr
Topics: FAtt
23 Jan 2022

Global explainability in aligned image modalities
Justin Engelmann, Amos Storkey, Miguel O. Bernabeu
Topics: FAtt
17 Dec 2021

HIVE: Evaluating the Human Interpretability of Visual Explanations
Sunnie S. Y. Kim, Nicole Meister, V. V. Ramaswamy, Ruth C. Fong, Olga Russakovsky
06 Dec 2021

Improving Deep Learning Interpretability by Saliency Guided Training
Aya Abdelsalam Ismail, H. C. Bravo, S. Feizi
Topics: FAtt
29 Nov 2021

Evaluation of Interpretability for Deep Learning algorithms in EEG Emotion Recognition: A case study in Autism
J. M. M. Torres, Sara E. Medina-DeVilliers, T. Clarkson, M. Lerner, Giuseppe Riccardi
25 Nov 2021

An Analysis of the Influence of Transfer Learning When Measuring the Tortuosity of Blood Vessels
Matheus V. da Silva, Julie Ouellette, Baptiste Lacoste, C. H. Comin
19 Nov 2021

A Practical guide on Explainable AI Techniques applied on Biomedical use case applications
Adrien Bennetot, Ivan Donadello, Ayoub El Qadi, M. Dragoni, Thomas Frossard, ..., M. Trocan, Raja Chatila, Andreas Holzinger, Artur Garcez, Natalia Díaz Rodríguez
Topics: XAI
13 Nov 2021

Self-Interpretable Model with Transformation Equivariant Interpretation
Yipei Wang, Xiaoqian Wang
09 Nov 2021

Defense Against Explanation Manipulation
Ruixiang Tang, Ninghao Liu, Fan Yang, Na Zou, Xia Hu
Topics: AAML
08 Nov 2021

Gradient Frequency Modulation for Visually Explaining Video Understanding Models
Xinmiao Lin, Wentao Bao, Matthew Wright, Yu Kong
Topics: FAtt, AAML
01 Nov 2021

Transparency of Deep Neural Networks for Medical Image Analysis: A Review of Interpretability Methods
Zohaib Salahuddin, Henry C. Woodruff, A. Chatterjee, Philippe Lambin
01 Nov 2021

A Survey on the Robustness of Feature Importance and Counterfactual Explanations
Saumitra Mishra, Sanghamitra Dutta, Jason Long, Daniele Magazzeni
Topics: AAML
30 Oct 2021

Reliable and Trustworthy Machine Learning for Health Using Dataset Shift Detection
Chunjong Park, Anas Awadalla, Tadayoshi Kohno, Shwetak N. Patel
Topics: OOD
26 Oct 2021

VAC-CNN: A Visual Analytics System for Comparative Studies of Deep Convolutional Neural Networks
Xiwei Xuan, Xiaoyu Zhang, Oh-Hyun Kwon, K. Ma
Topics: HAI
25 Oct 2021

Double Trouble: How to not explain a text classifier's decisions using counterfactuals synthesized by masked language models?
Thang M. Pham, Trung H. Bui, Long Mai, Anh Totti Nguyen
22 Oct 2021

TorchEsegeta: Framework for Interpretability and Explainability of Image-based Deep Learning Models
S. Chatterjee, Arnab Das, Chirag Mandal, Budhaditya Mukhopadhyay, Manish Vipinraj, Aniruddh Shukla, R. Rao, Chompunuch Sarasaen, Oliver Speck, A. Nürnberger
Topics: MedIm
16 Oct 2021

Evaluating the Faithfulness of Importance Measures in NLP by Recursively Masking Allegedly Important Tokens and Retraining
Andreas Madsen, Nicholas Meade, Vaibhav Adlakha, Siva Reddy
15 Oct 2021

The Irrationality of Neural Rationale Models
Yiming Zheng, Serena Booth, J. Shah, Yilun Zhou
14 Oct 2021

TSGB: Target-Selective Gradient Backprop for Probing CNN Visual Saliency
Lin Cheng, Pengfei Fang, Yanjie Liang, Liao Zhang, Chunhua Shen, Hanzi Wang
Topics: FAtt
11 Oct 2021

Deep Synoptic Monte Carlo Planning in Reconnaissance Blind Chess
Gregory Clark
05 Oct 2021

Consistent Explanations by Contrastive Learning
Vipin Pillai, Soroush Abbasi Koohpayegani, Ashley Ouligian, Dennis Fong, Hamed Pirsiavash
Topics: FAtt
01 Oct 2021

Focus! Rating XAI Methods and Finding Biases
Anna Arias-Duart, Ferran Parés, Dario Garcia-Gasulla, Víctor Giménez-Ábalos
28 Sep 2021

Deep Learning-Based Detection of the Acute Respiratory Distress Syndrome: What Are the Models Learning?
Gregory B. Rehm, Chao Wang, I. Cortés-Puch, Chen-Nee Chuah, Jason Y. Adams
25 Sep 2021

From Heatmaps to Structural Explanations of Image Classifiers
Li Fuxin, Zhongang Qi, Saeed Khorram, Vivswan Shitole, Prasad Tadepalli, Minsuk Kahng, Alan Fern
Topics: XAI, FAtt
13 Sep 2021

IFBiD: Inference-Free Bias Detection
Ignacio Serna, Daniel DeAlcala, Aythami Morales, Julian Fierrez, J. Ortega-Garcia
Topics: CVBM
09 Sep 2021

Deriving Explanation of Deep Visual Saliency Models
S. Malladi, J. Mukhopadhyay, M. Larabi, S. Chaudhury
Topics: FAtt, XAI
08 Sep 2021

PACE: Posthoc Architecture-Agnostic Concept Extractor for Explaining CNNs
V. Kamakshi, Uday Gupta, N. C. Krishnan
31 Aug 2021