arXiv: 1810.03292
Sanity Checks for Saliency Maps
8 October 2018
Julius Adebayo, Justin Gilmer, M. Muelly, Ian Goodfellow, Moritz Hardt, Been Kim
Topics: FAtt, AAML, XAI

Papers citing "Sanity Checks for Saliency Maps"

50 / 302 papers shown
TAME: Attention Mechanism Based Feature Fusion for Generating Explanation Maps of Convolutional Neural Networks
Mariano V. Ntrougkas, Nikolaos Gkalelis, Vasileios Mezaris
Topics: FAtt
18 Jan 2023

Opti-CAM: Optimizing saliency maps for interpretability
Hanwei Zhang, Felipe Torres, R. Sicre, Yannis Avrithis, Stéphane Ayache
17 Jan 2023

Towards Reconciling Usability and Usefulness of Explainable AI Methodologies
Pradyumna Tambwekar, Matthew C. Gombolay
13 Jan 2023

Saliency-Augmented Memory Completion for Continual Learning
Guangji Bai, Chen Ling, Yuyang Gao, Liang Zhao
Topics: CLL
26 Dec 2022

Impossibility Theorems for Feature Attribution
Blair Bilodeau, Natasha Jaques, Pang Wei Koh, Been Kim
Topics: FAtt
22 Dec 2022

Robust Explanation Constraints for Neural Networks
Matthew Wicker, Juyeon Heo, Luca Costabello, Adrian Weller
Topics: FAtt
16 Dec 2022

Interpretable ML for Imbalanced Data
Damien Dablain, C. Bellinger, Bartosz Krawczyk, D. Aha, Nitesh V. Chawla
15 Dec 2022

On the Relationship Between Explanation and Prediction: A Causal View
Amir-Hossein Karimi, Krikamol Muandet, Simon Kornblith, Bernhard Schölkopf, Been Kim
Topics: FAtt, CML
13 Dec 2022

This changes to that: Combining causal and non-causal explanations to generate disease progression in capsule endoscopy
Anuja Vats, A. Mohammed, Marius Pedersen, Nirmalie Wiratunga
Topics: MedIm
05 Dec 2022

Understanding and Enhancing Robustness of Concept-based Models
Sanchit Sinha, Mengdi Huai, Jianhui Sun, Aidong Zhang
Topics: AAML
29 Nov 2022

Towards More Robust Interpretation via Local Gradient Alignment
Sunghwan Joo, Seokhyeon Jeong, Juyeon Heo, Adrian Weller, Taesup Moon
Topics: FAtt
29 Nov 2022

Attribution-based XAI Methods in Computer Vision: A Review
Kumar Abhishek, Deeksha Kamath
27 Nov 2022

MEGAN: Multi-Explanation Graph Attention Network
Jonas Teufel, Luca Torresi, Patrick Reiser, Pascal Friederich
23 Nov 2022

ModelDiff: A Framework for Comparing Learning Algorithms
Harshay Shah, Sung Min Park, Andrew Ilyas, A. Madry
Topics: SyDa
22 Nov 2022

Do graph neural networks learn traditional jet substructure?
Farouk Mokhtar, Raghav Kansal, Javier Mauricio Duarte
Topics: GNN
17 Nov 2022

Data-Centric Debugging: mitigating model failures via targeted data collection
Sahil Singla, Atoosa Malemir Chegini, Mazda Moayeri, Soheil Feiz
17 Nov 2022

CRAFT: Concept Recursive Activation FacTorization for Explainability
Thomas Fel, Agustin Picard, Louis Bethune, Thibaut Boissin, David Vigouroux, Julien Colin, Rémi Cadène, Thomas Serre
17 Nov 2022

Interpretable Few-shot Learning with Online Attribute Selection
M. Zarei, Majid Komeili
Topics: FAtt
16 Nov 2022

What Makes a Good Explanation?: A Harmonized View of Properties of Explanations
Zixi Chen, Varshini Subhash, Marton Havasi, Weiwei Pan, Finale Doshi-Velez
Topics: XAI, FAtt
10 Nov 2022

On the Robustness of Explanations of Deep Neural Network Models: A Survey
Amlan Jyoti, Karthik Balaji Ganesh, Manoj Gayala, Nandita Lakshmi Tunuguntla, Sandesh Kamath, V. Balasubramanian
Topics: XAI, FAtt, AAML
09 Nov 2022

ViT-CX: Causal Explanation of Vision Transformers
Weiyan Xie, Xiao-hui Li, Caleb Chen Cao, Nevin L. Zhang
Topics: ViT
06 Nov 2022

BOREx: Bayesian-Optimization-Based Refinement of Saliency Map for Image- and Video-Classification Models
Atsushi Kikuchi, Kotaro Uchida, Masaki Waga, Kohei Suenaga
Topics: FAtt
31 Oct 2022

Learning on the Job: Self-Rewarding Offline-to-Online Finetuning for Industrial Insertion of Novel Connectors from Vision
Ashvin Nair, Brian Zhu, Gokul Narayanan, Eugen Solowjow, Sergey Levine
Topics: OffRL, OnRL
27 Oct 2022

Hierarchical Neyman-Pearson Classification for Prioritizing Severe Disease Categories in COVID-19 Patient Data
Lijia Wang, Y. X. R. Wang, Jingyi Jessica Li, Xin Tong
01 Oct 2022

Variance Covariance Regularization Enforces Pairwise Independence in Self-Supervised Representations
Grégoire Mialon, Randall Balestriero, Yann LeCun
29 Sep 2022

Greybox XAI: a Neural-Symbolic learning framework to produce interpretable predictions for image classification
Adrien Bennetot, Gianni Franchi, Javier Del Ser, Raja Chatila, Natalia Díaz Rodríguez
Topics: AAML
26 Sep 2022

I-SPLIT: Deep Network Interpretability for Split Computing
Federico Cunico, Luigi Capogrosso, Francesco Setti, D. Carra, Franco Fummi, Marco Cristani
23 Sep 2022

XClusters: Explainability-first Clustering
Hyunseung Hwang, Steven Euijong Whang
22 Sep 2022

A model-agnostic approach for generating Saliency Maps to explain inferred decisions of Deep Learning Models
S. Karatsiolis, A. Kamilaris
Topics: FAtt
19 Sep 2022

TCAM: Temporal Class Activation Maps for Object Localization in Weakly-Labeled Unconstrained Videos
Soufiane Belharbi, Ismail Ben Ayed, Luke McCaffrey, Eric Granger
Topics: WSOL
30 Aug 2022

Concept-Based Techniques for "Musicologist-friendly" Explanations in a Deep Music Classifier
Francesco Foscarin, Katharina Hoedt, Verena Praher, A. Flexer, Gerhard Widmer
26 Aug 2022

SoK: Explainable Machine Learning for Computer Security Applications
A. Nadeem, D. Vos, Clinton Cao, Luca Pajola, Simon Dieck, Robert Baumgartner, S. Verwer
22 Aug 2022

HetVis: A Visual Analysis Approach for Identifying Data Heterogeneity in Horizontal Federated Learning
Xumeng Wang, Wei-Neng Chen, Jiazhi Xia, Zhen Wen, Rongchen Zhu, Tobias Schreck
Topics: FedML
16 Aug 2022

Gradient Mask: Lateral Inhibition Mechanism Improves Performance in Artificial Neural Networks
Lei Jiang, Yongqing Liu, Shihai Xiao, Yansong Chua
14 Aug 2022

The Weighting Game: Evaluating Quality of Explainability Methods
Lassi Raatikainen, Esa Rahtu
Topics: FAtt, XAI
12 Aug 2022

Shap-CAM: Visual Explanations for Convolutional Neural Networks based on Shapley Value
Quan Zheng, Ziwei Wang, Jie Zhou, Jiwen Lu
Topics: FAtt
07 Aug 2022

ferret: a Framework for Benchmarking Explainers on Transformers
Giuseppe Attanasio, Eliana Pastor, C. Bonaventura, Debora Nozza
02 Aug 2022

Adaptive occlusion sensitivity analysis for visually explaining video recognition networks
Tomoki Uchiyama, Naoya Sogi, S. Iizuka, Koichiro Niinuma, Kazuhiro Fukui
26 Jul 2022

ScoreCAM GNN: une explication optimale des réseaux profonds sur graphes (in English: ScoreCAM GNN: an optimal explanation of deep networks on graphs)
Adrien Raison, Pascal Bourdon, David Helbert
Topics: FAtt, GNN
26 Jul 2022

LightX3ECG: A Lightweight and eXplainable Deep Learning System for 3-lead Electrocardiogram Classification
Khiem H. Le, Hieu H. Pham, Thao BT. Nguyen, Tu Nguyen, T. Thanh, Cuong D. Do
25 Jul 2022

XG-BoT: An Explainable Deep Graph Neural Network for Botnet Detection and Forensics
Wai Weng Lo, Gayan K. Kulatilleke, Mohanad Sarhan, S. Layeghy, Marius Portmann
19 Jul 2022

BASED-XAI: Breaking Ablation Studies Down for Explainable Artificial Intelligence
Isha Hameed, Samuel Sharpe, Daniel Barcklow, Justin Au-yeung, Sahil Verma, Jocelyn Huang, Brian Barr, C. B. Bruss
12 Jul 2022

Distilling Model Failures as Directions in Latent Space
Saachi Jain, Hannah Lawrence, Ankur Moitra, A. Madry
29 Jun 2022

Auditing Visualizations: Transparency Methods Struggle to Detect Anomalous Behavior
Jean-Stanislas Denain, Jacob Steinhardt
Topics: AAML
27 Jun 2022

Towards ML Methods for Biodiversity: A Novel Wild Bee Dataset and Evaluations of XAI Methods for ML-Assisted Rare Species Annotations
Teodor Chiaburu, F. Biessmann, Frank Haußer
15 Jun 2022

A Functional Information Perspective on Model Interpretation
Itai Gat, Nitay Calderon, Roi Reichart, Tamir Hazan
Topics: AAML, FAtt
12 Jun 2022

Towards better Interpretable and Generalizable AD detection using Collective Artificial Intelligence
H. Nguyen, Michael Clement, Boris Mansencal, Pierrick Coupé
Topics: MedIm
07 Jun 2022

A Human-Centric Take on Model Monitoring
Murtuza N. Shergadwala, Himabindu Lakkaraju, K. Kenthapadi
06 Jun 2022

Dual Decomposition of Convex Optimization Layers for Consistent Attention in Medical Images
Tom Ron, M. Weiler-Sagie, Tamir Hazan
Topics: FAtt, MedIm
06 Jun 2022

Use-Case-Grounded Simulations for Explanation Evaluation
Valerie Chen, Nari Johnson, Nicholay Topin, Gregory Plumb, Ameet Talwalkar
Topics: FAtt, ELM
05 Jun 2022