Sanity Checks for Saliency Maps

Julius Adebayo, Justin Gilmer, M. Muelly, Ian Goodfellow, Moritz Hardt, Been Kim
8 October 2018 · arXiv:1810.03292
Topics: FAtt, AAML, XAI
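As context for the citation list below: the paper's central contribution is a set of sanity checks for saliency methods, most notably the cascading model parameter randomization test, in which weights are re-initialized layer by layer from the output downward and the saliency map is recomputed after each step. A method that is faithful to the model should produce visibly different maps once the learned weights are destroyed. A minimal sketch of that test, assuming a Keras-style model object and a user-supplied saliency(model, x) function (both hypothetical stand-ins, not the authors' released code):

```python
import numpy as np

def spearman_rank_correlation(a, b):
    """Spearman correlation of two flattened saliency maps."""
    ra = np.argsort(np.argsort(a.ravel()))
    rb = np.argsort(np.argsort(b.ravel()))
    return float(np.corrcoef(ra, rb)[0, 1])

def cascading_randomization_check(model, saliency, x, seed=0):
    """Randomize weights layer by layer (top-down) and compare saliency maps.

    A saliency method that depends on what the model has learned should
    produce substantially different maps as weights are destroyed; maps
    that stay highly similar suggest the method is model-insensitive.
    NOTE: mutates `model` in place; run on a throwaway copy.
    """
    rng = np.random.default_rng(seed)
    original_map = saliency(model, x)
    # Cascading randomization: re-initialize from the output layer downward.
    for layer in reversed(model.layers):
        layer.set_weights(
            [rng.normal(size=w.shape).astype(w.dtype) for w in layer.get_weights()]
        )
        randomized_map = saliency(model, x)
        # Similarity near 1 after randomization is a red flag for the method.
        rho = spearman_rank_correlation(original_map, randomized_map)
        print(f"randomized through {layer.name}: Spearman rho = {rho:.3f}")
```

The paper compares maps with several similarity measures; Spearman rank correlation, used above, is one of them.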

Papers citing "Sanity Checks for Saliency Maps"

Showing 50 of 302 citing papers.

Towards Out-Of-Distribution Generalization: A Survey
Jiashuo Liu, Zheyan Shen, Yue He, Xingxuan Zhang, Renzhe Xu, Han Yu, Peng Cui
CML, OOD · 31 Aug 2021

Feature Importance in a Deep Learning Climate Emulator
Wei-ping Xu, Xihaier Luo, Yihui Ren, Ji Hwan Park, Shinjae Yoo, B. Nadiga
FAtt, AI4TS · 27 Aug 2021

Longitudinal Distance: Towards Accountable Instance Attribution
Rosina O. Weber, Prateek Goel, S. Amiri, G. Simpson
23 Aug 2021

GCExplainer: Human-in-the-Loop Concept-based Explanations for Graph Neural Networks
Lucie Charlotte Magister, Dmitry Kazhdan, Vikash Singh, Pietro Lió
25 Jul 2021

Robust Counterfactual Explanations on Graph Neural Networks
Mohit Bajaj, Lingyang Chu, Zihui Xue, J. Pei, Lanjun Wang, P. C. Lam, Yong Zhang
OOD · 08 Jul 2021

Challenges for machine learning in clinical translation of big data imaging studies
Nicola K. Dinsdale, Emma Bluemke, V. Sundaresan, M. Jenkinson, Stephen Smith, Ana I. L. Namburete
AI4CE · 07 Jul 2021

Knowledge-Grounded Self-Rationalization via Extractive and Natural Language Explanations
Bodhisattwa Prasad Majumder, Oana-Maria Camburu, Thomas Lukasiewicz, Julian McAuley
25 Jun 2021

Evaluation of Saliency-based Explainability Method
Sam Zabdiel Sunder Samuel, V. Kamakshi, Namrata Lodhi, N. C. Krishnan
FAtt, XAI · 24 Jun 2021

Synthetic Benchmarks for Scientific Research in Explainable Machine Learning
Yang Liu, Sujay Khandagale, Colin White, W. Neiswanger
23 Jun 2021
CAMERAS: Enhanced Resolution And Sanity preserving Class Activation Mapping for image saliency
M. Jalwana, Naveed Akhtar, Bennamoun, Ajmal Saeed Mian
20 Jun 2021

Characterizing the risk of fairwashing
Ulrich Aivodji, Hiromi Arai, Sébastien Gambs, Satoshi Hara
14 Jun 2021

Certification of embedded systems based on Machine Learning: A survey
Guillaume Vidot, Christophe Gabreau, I. Ober, Iulian Ober
14 Jun 2021

3DB: A Framework for Debugging Computer Vision Models
Guillaume Leclerc, Hadi Salman, Andrew Ilyas, Sai H. Vemprala, Logan Engstrom, ..., Pengchuan Zhang, Shibani Santurkar, Greg Yang, Ashish Kapoor, A. Madry
07 Jun 2021

Leveraging Sparse Linear Layers for Debuggable Deep Networks
Eric Wong, Shibani Santurkar, A. Madry
FAtt · 11 May 2021

Do Feature Attribution Methods Correctly Attribute Features?
Yilun Zhou, Serena Booth, Marco Tulio Ribeiro, J. Shah
FAtt, XAI · 27 Apr 2021

Exploiting Explanations for Model Inversion Attacks
Xu Zhao, Wencan Zhang, Xiao Xiao, Brian Y. Lim
MIACV · 26 Apr 2021

Towards Rigorous Interpretations: a Formalisation of Feature Attribution
Darius Afchar, Romain Hennequin, Vincent Guigue
FAtt · 26 Apr 2021

EXplainable Neural-Symbolic Learning (X-NeSyL) methodology to fuse deep learning representations with expert knowledge graphs: the MonuMAI cultural heritage use case
Natalia Díaz Rodríguez, Alberto Lamas, Jules Sanchez, Gianni Franchi, Ivan Donadello, S. Tabik, David Filliat, P. Cruz, Rosana Montes, Francisco Herrera
24 Apr 2021
Towards a Collective Agenda on AI for Earth Science Data Analysis
D. Tuia, R. Roscher, Jan Dirk Wegner, Nathan Jacobs, Xiaoxiang Zhu, Gustau Camps-Valls
AI4CE · 11 Apr 2021

Deep Interpretable Models of Theory of Mind
Ini Oguntola, Dana Hughes, Katia P. Sycara
HAI · 07 Apr 2021

Evaluating explainable artificial intelligence methods for multi-label deep learning classification tasks in remote sensing
Ioannis Kakogeorgiou, Konstantinos Karantzalos
XAI · 03 Apr 2021

Model Selection's Disparate Impact in Real-World Deep Learning Applications
Jessica Zosa Forde, A. Feder Cooper, Kweku Kwegyir-Aggrey, Chris De Sa, Michael Littman
01 Apr 2021

Explaining COVID-19 and Thoracic Pathology Model Predictions by Identifying Informative Input Features
Ashkan Khakzar, Yang Zhang, W. Mansour, Yuezhi Cai, Yawei Li, Yucheng Zhang, Seong Tae Kim, Nassir Navab
FAtt · 01 Apr 2021

Group-CAM: Group Score-Weighted Visual Explanations for Deep Convolutional Networks
Qing-Long Zhang, Lu Rao, Yubin Yang
25 Mar 2021

IAIA-BL: A Case-based Interpretable Deep Learning Model for Classification of Mass Lesions in Digital Mammography
A. Barnett, F. Schwartz, Chaofan Tao, Chaofan Chen, Yinhao Ren, J. Lo, Cynthia Rudin
23 Mar 2021

Robust Models Are More Interpretable Because Attributions Look Normal
Zifan Wang, Matt Fredrikson, Anupam Datta
OOD, FAtt · 20 Mar 2021

Interpretable Machine Learning: Fundamental Principles and 10 Grand Challenges
Cynthia Rudin, Chaofan Chen, Zhi Chen, Haiyang Huang, Lesia Semenova, Chudi Zhong
FaML, AI4CE, LRM · 20 Mar 2021
Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations
Pau Rodríguez López, Massimo Caccia, Alexandre Lacoste, L. Zamparo, I. Laradji, Laurent Charlin, David Vazquez
AAML · 18 Mar 2021

Neural Network Attribution Methods for Problems in Geoscience: A Novel Synthetic Benchmark Dataset
Antonios Mamalakis, I. Ebert-Uphoff, E. Barnes
OOD · 18 Mar 2021

Interpretable Machine Learning: Moving From Mythos to Diagnostics
Valerie Chen, Jeffrey Li, Joon Sik Kim, Gregory Plumb, Ameet Talwalkar
10 Mar 2021

Detecting Spurious Correlations with Sanity Tests for Artificial Intelligence Guided Radiology Systems
U. Mahmood, Robik Shrestha, D. Bates, L. Mannelli, G. Corrias, Y. Erdi, Christopher Kanan
04 Mar 2021

Have We Learned to Explain?: How Interpretability Methods Can Learn to Encode Predictions in their Interpretations
N. Jethani, Mukund Sudarshan, Yindalon Aphinyanagphongs, Rajesh Ranganath
FAtt · 02 Mar 2021

Do Input Gradients Highlight Discriminative Features?
Harshay Shah, Prateek Jain, Praneeth Netrapalli
AAML, FAtt · 25 Feb 2021

Gifsplanation via Latent Shift: A Simple Autoencoder Approach to Counterfactual Generation for Chest X-rays
Joseph Paul Cohen, Rupert Brooks, Sovann En, Evan Zucker, Anuj Pareek, M. Lungren, Akshay S. Chaudhari
FAtt, MedIm · 18 Feb 2021

Connecting Interpretability and Robustness in Decision Trees through Separation
Michal Moshkovitz, Yao-Yuan Yang, Kamalika Chaudhuri
14 Feb 2021

Explainability of deep vision-based autonomous driving systems: Review and challenges
Éloi Zablocki, H. Ben-younes, P. Pérez, Matthieu Cord
XAI · 13 Jan 2021
Robust Machine Learning Systems: Challenges, Current Trends, Perspectives, and the Road Ahead
Muhammad Shafique, Mahum Naseer, T. Theocharides, C. Kyrkou, O. Mutlu, Lois Orosa, Jungwook Choi
OOD · 04 Jan 2021

On Baselines for Local Feature Attributions
Johannes Haug, Stefan Zurn, Peter El-Jiz, Gjergji Kasneci
FAtt · 04 Jan 2021

iGOS++: Integrated Gradient Optimized Saliency by Bilateral Perturbations
Saeed Khorram, T. Lawson, Fuxin Li
AAML, FAtt · 31 Dec 2020

Understanding Learned Reward Functions
Eric J. Michaud, Adam Gleave, Stuart J. Russell
XAI, OffRL · 10 Dec 2020

Debiased-CAM to mitigate image perturbations with faithful visual explanations of machine learning
Wencan Zhang, Mariella Dimiccoli, Brian Y. Lim
FAtt · 10 Dec 2020

Interpretable Graph Capsule Networks for Object Recognition
Jindong Gu, Volker Tresp
FAtt · 03 Dec 2020

ProtoPShare: Prototype Sharing for Interpretable Image Classification and Similarity Discovery
Dawid Rymarczyk, Lukasz Struski, Jacek Tabor, Bartosz Zieliński
29 Nov 2020

Reflective-Net: Learning from Explanations
Johannes Schneider, Michalis Vlachos
FAtt, OffRL, LRM · 27 Nov 2020

Explaining by Removing: A Unified Framework for Model Explanation
Ian Covert, Scott M. Lundberg, Su-In Lee
FAtt · 21 Nov 2020

Robust and Stable Black Box Explanations
Himabindu Lakkaraju, Nino Arsov, Osbert Bastani
AAML, FAtt · 12 Nov 2020
Quantifying Learnability and Describability of Visual Concepts Emerging in Representation Learning
Iro Laina, Ruth C. Fong, Andrea Vedaldi
OCL · 27 Oct 2020

Exemplary Natural Images Explain CNN Activations Better than State-of-the-Art Feature Visualization
Judy Borowski, Roland S. Zimmermann, Judith Schepers, Robert Geirhos, Thomas S. A. Wallis, Matthias Bethge, Wieland Brendel
FAtt · 23 Oct 2020

A Survey on Deep Learning and Explainability for Automatic Report Generation from Medical Images
Pablo Messina, Pablo Pino, Denis Parra, Alvaro Soto, Cecilia Besa, S. Uribe, Marcelo Andía, C. Tejos, Claudia Prieto, Daniel Capurro
MedIm · 20 Oct 2020

Interpretable Machine Learning -- A Brief History, State-of-the-Art and Challenges
Christoph Molnar, Giuseppe Casalicchio, B. Bischl
AI4TS, AI4CE · 19 Oct 2020