arXiv:2002.00772 · Cited By
Evaluating Saliency Map Explanations for Convolutional Neural Networks: A User Study (3 February 2020)
Ahmed Alqaraawi, M. Schuessler, Philipp Weiß, Enrico Costanza, N. Bianchi-Berthouze. Tags: AAML, FAtt, XAI
Papers citing "Evaluating Saliency Map Explanations for Convolutional Neural Networks: A User Study" (showing 50 of 87)
Evaluating Model Explanations without Ground Truth (15 May 2025)
Kaivalya Rawal, Zihao Fu, Eoin Delaney, Chris Russell. Tags: FAtt, XAI

What Do People Want to Know About Artificial Intelligence (AI)? The Importance of Answering End-User Questions to Explain Autonomous Vehicle (AV) Decisions (9 May 2025)
Somayeh Molaei, Lionel P. Robert, Nikola Banovic

What Makes for a Good Saliency Map? Comparing Strategies for Evaluating Saliency Maps in Explainable AI (XAI) (23 Apr 2025)
Felix Kares, Timo Speith, Hanwei Zhang, Markus Langer. Tags: FAtt, XAI

From Abstract to Actionable: Pairwise Shapley Values for Explainable AI (18 Feb 2025)
Jiaxin Xu, Hung Chau, Angela Burden. Tags: TDI

Identifying Bias in Deep Neural Networks Using Image Transforms (17 Dec 2024)
Sai Teja Erukude, Akhil Joshi, Lior Shamir

Towards Human-centered Design of Explainable Artificial Intelligence (XAI): A Survey of Empirical Studies (28 Oct 2024)
Shuai Ma

Fool Me Once? Contrasting Textual and Visual Explanations in a Clinical Decision-Support Setting (16 Oct 2024)
Maxime Kayser, Bayar I. Menzat, Cornelius Emde, Bogdan Bercean, Alex Novak, Abdala Espinosa, B. Papież, Susanne Gaube, Thomas Lukasiewicz, Oana-Maria Camburu

Explainable AI Reloaded: Challenging the XAI Status Quo in the Era of Large Language Models (9 Aug 2024)
Upol Ehsan, Mark O. Riedl

Automatic rating of incomplete hippocampal inversions evaluated across multiple cohorts (5 Aug 2024)
Lisa Hemforth, B. Couvy-Duchesne, Kevin de Matos, Camille Brianceau, Matthieu Joulot, ..., V. Frouin, Alexandre Martin, IMAGEN study group, C. Cury, O. Colliot

On Behalf of the Stakeholders: Trends in NLP Model Interpretability in the Era of LLMs (27 Jul 2024)
Nitay Calderon, Roi Reichart

Graphical Perception of Saliency-based Model Explanations (11 Jun 2024)
Yayan Zhao, Mingwei Li, Matthew Berger. Tags: XAI, FAtt

The AI-DEC: A Card-based Design Method for User-centered AI Explanations (26 May 2024)
Christine P. Lee, M. Lee, Bilge Mutlu. Tags: HAI

Explaining Multi-modal Large Language Models by Analyzing their Vision Perception (23 May 2024)
Loris Giulivi, Giacomo Boracchi

Concept Visualization: Explaining the CLIP Multi-modal Embedding Using WordNet (23 May 2024)
Loris Giulivi, Giacomo Boracchi

Unraveling the Dilemma of AI Errors: Exploring the Effectiveness of Human and Machine Explanations for Large Language Models (11 Apr 2024)
Marvin Pafla, Kate Larson, Mark Hancock

How explainable AI affects human performance: A systematic review of the behavioural consequences of saliency maps (3 Apr 2024)
Romy Müller. Tags: HAI

Improving deep learning with prior knowledge and cognitive models: A survey on enhancing explainability, adversarial robustness and zero-shot learning (11 Mar 2024)
F. Mumuni, A. Mumuni. Tags: AAML

Can Interpretability Layouts Influence Human Perception of Offensive Sentences? (1 Mar 2024)
Thiago Freitas dos Santos, Nardine Osman, Marco Schorlemmer

Reimagining Anomalies: What If Anomalies Were Normal? (22 Feb 2024)
Philipp Liznerski, Saurabh Varshneya, Ece Calikus, Sophie Fellenz, Marius Kloft

OpenHEXAI: An Open-Source Framework for Human-Centered Evaluation of Explainable Machine Learning (20 Feb 2024)
Jiaqi Ma, Vivian Lai, Yiming Zhang, Chacha Chen, Paul Hamilton, Davor Ljubenkov, Himabindu Lakkaraju, Chenhao Tan. Tags: ELM

Explaining Time Series via Contrastive and Locally Sparse Perturbations (16 Jan 2024)
Zichuan Liu, Yingying Zhang, Tianchun Wang, Zefan Wang, Dongsheng Luo, ..., Min Wu, Yi Wang, Chunlin Chen, Lunting Fan, Qingsong Wen

Decoding AI's Nudge: A Unified Framework to Predict Human Behavior in AI-assisted Decision Making (11 Jan 2024)
Zhuoyan Li, Zhuoran Lu, Ming Yin

ALMANACS: A Simulatability Benchmark for Language Model Explainability (20 Dec 2023)
Edmund Mills, Shiye Su, Stuart J. Russell, Scott Emmons

Error Discovery by Clustering Influence Embeddings (7 Dec 2023)
Fulton Wang, Julius Adebayo, Sarah Tan, Diego Garcia-Olano, Narine Kokhlikyan

Understanding Parameter Saliency via Extreme Value Theory (27 Oct 2023)
Shuo Wang, Issei Sato. Tags: AAML, FAtt

Predictability and Comprehensibility in Post-Hoc XAI Methods: A User-Centered Analysis (21 Sep 2023)
Anahid N. Jalali, Bernhard Haslhofer, Simone Kriglstein, Andreas Rauber. Tags: FAtt

TExplain: Explaining Learned Visual Features via Pre-trained (Frozen) Language Models (1 Sep 2023)
Saeid Asgari Taghanaki, Aliasghar Khani, Ali Saheb Pasand, Amir Khasahmadi, Aditya Sanghi, K. Willis, Ali Mahdavi-Amiri. Tags: FAtt, VLM

FINER: Enhancing State-of-the-art Classifiers with Feature Attribution to Facilitate Security Analysis (10 Aug 2023)
Yiling He, Jian Lou, Zhan Qin, Kui Ren. Tags: FAtt, AAML

Understanding the Effect of Counterfactual Explanations on Trust and Reliance on AI for Human-AI Collaborative Clinical Decision Making (8 Aug 2023)
Min Hun Lee, Chong Jun Chew

Interpretable Sparsification of Brain Graphs: Better Practices and Effective Designs for Graph Neural Networks (26 Jun 2023)
Gao Li, M. Duda, X. Zhang, Danai Koutra, Yujun Yan

Towards Robust Aspect-based Sentiment Analysis through Non-counterfactual Augmentations (24 Jun 2023)
Xinyu Liu, Yanl Ding, Kaikai An, Chunyang Xiao, Pranava Madhyastha, Tong Xiao, Jingbo Zhu

In Search of Verifiability: Explanations Rarely Enable Complementary Performance in AI-Advised Decision Making (12 May 2023)
Raymond Fok, Daniel S. Weld

Multimodal Understanding Through Correlation Maximization and Minimization (4 May 2023)
Yi Shi, Marc Niethammer

Explainability in AI Policies: A Critical Review of Communications, Reports, Regulations, and Standards in the EU, US, and UK (20 Apr 2023)
L. Nannini, Agathe Balayn, A. Smith

Performance of GAN-based augmentation for deep learning COVID-19 image classification (18 Apr 2023)
Oleksandr Fedoruk, Konrad Klimaszewski, Aleksander Ogonowski, Rafał Możdżonek. Tags: OOD, MedIm

How good Neural Networks interpretation methods really are? A quantitative benchmark (5 Apr 2023)
Antoine Passemiers, Pietro Folco, D. Raimondi, G. Birolo, Yves Moreau, P. Fariselli. Tags: FAtt

Model-agnostic explainable artificial intelligence for object detection in image data (30 Mar 2023)
M. Moradi, Ke Yan, David Colwell, Matthias Samwald, Rhona Asgari. Tags: AAML

IRIS: Interpretable Rubric-Informed Segmentation for Action Quality Assessment (16 Mar 2023)
Hitoshi Matsuyama, Nobuo Kawaguchi, Brian Y. Lim

The Generalizability of Explanations (23 Feb 2023)
Hanxiao Tan. Tags: FAtt

Understanding User Preferences in Explainable Artificial Intelligence: A Survey and a Mapping Function Proposal (7 Feb 2023)
M. Hashemi, Ali Darejeh, Francisco Cruz

Charting the Sociotechnical Gap in Explainable AI: A Framework to Address the Gap in XAI (1 Feb 2023)
Upol Ehsan, Koustuv Saha, M. D. Choudhury, Mark O. Riedl

Explainable Deep Reinforcement Learning: State of the Art and Challenges (24 Jan 2023)
G. Vouros. Tags: XAI

On the Relationship Between Explanation and Prediction: A Causal View (13 Dec 2022)
Amir-Hossein Karimi, Krikamol Muandet, Simon Kornblith, Bernhard Schölkopf, Been Kim. Tags: FAtt, CML

Post hoc Explanations may be Ineffective for Detecting Unknown Spurious Correlation (9 Dec 2022)
Julius Adebayo, M. Muelly, H. Abelson, Been Kim

A Rigorous Study Of The Deep Taylor Decomposition (14 Nov 2022)
Leon Sixt, Tim Landgraf. Tags: FAtt, AAML

Towards Human-centered Explainable AI: A Survey of User Studies for Model Explanations (20 Oct 2022)
Yao Rong, Tobias Leemann, Thai-trang Nguyen, Lisa Fiedler, Peizhu Qian, Vaibhav Unhelkar, Tina Seidel, Gjergji Kasneci, Enkelejda Kasneci. Tags: ELM

Assessing Out-of-Domain Language Model Performance from Few Examples (13 Oct 2022)
Prasann Singhal, Jarad Forristal, Xi Ye, Greg Durrett. Tags: LRM

Responsibility: An Example-based Explainable AI approach via Training Process Inspection (7 Sep 2022)
Faraz Khadivpour, Arghasree Banerjee, Matthew J. Guzdial. Tags: XAI

Monitoring Shortcut Learning using Mutual Information (27 Jun 2022)
Mohammed Adnan, Yani Andrew Ioannou, Chuan-Yung Tsai, A. Galloway, H. R. Tizhoosh, Graham W. Taylor

Comparison of attention models and post-hoc explanation methods for embryo stage identification: a case study (13 May 2022)
T. Gomez, Thomas Fréour, Harold Mouchère