Evaluating Saliency Map Explanations for Convolutional Neural Networks: A User Study

International Conference on Intelligent User Interfaces (IUI), 2020
3 February 2020
Ahmed Alqaraawi, M. Schuessler, Philipp Weiß, Enrico Costanza, N. Bianchi-Berthouze
Tags: AAML, FAtt, XAI

Papers citing "Evaluating Saliency Map Explanations for Convolutional Neural Networks: A User Study"

Showing 50 of 90 citing papers.

From Prediction to Explanation: Multimodal, Explainable, and Interactive Deepfake Detection Framework for Non-Expert Users
Shahroz Tariq, Simon S. Woo, Priyanka Singh, Irena Irmalasari, Saakshi Gupta, Dev Gupta
95 · 2 · 0 · 11 Aug 2025

CASE: Contrastive Activation for Saliency Estimation
Dane Williamson, Yangfeng Ji, Matthew B. Dwyer
Tags: FAtt, AAML
289 · 0 · 0 · 08 Jun 2025

Multi-Domain Explainability of Preferences
Nitay Calderon, Liat Ein-Dor, Roi Reichart
Tags: LRM
262 · 0 · 0 · 26 May 2025

Explanation User Interfaces: A Systematic Literature Review
Eleonora Cappuccio, Andrea Esposito, Francesco Greco, Giuseppe Desolda, Rosa Lanzilotti, Salvatore Rinzivillo
235 · 0 · 0 · 26 May 2025

Evaluating Model Explanations without Ground Truth
Conference on Fairness, Accountability and Transparency (FAccT), 2025
Kaivalya Rawal, Zihao Fu, Eoin Delaney, Chris Russell
Tags: FAtt, XAI
261 · 1 · 0 · 15 May 2025

What Do People Want to Know About Artificial Intelligence (AI)? The Importance of Answering End-User Questions to Explain Autonomous Vehicle (AV) Decisions
Proceedings of the ACM on Human-Computer Interaction (PACMHCI), 2025
Somayeh Molaei, Lionel P. Robert, Nikola Banovic
129 · 0 · 0 · 09 May 2025

What Makes for a Good Saliency Map? Comparing Strategies for Evaluating Saliency Maps in Explainable AI (XAI)
Felix Kares, Timo Speith, Hanwei Zhang, Markus Langer
Tags: FAtt, XAI
303 · 4 · 0 · 23 Apr 2025

From Abstract to Actionable: Pairwise Shapley Values for Explainable AI
Jiaxin Xu, Hung Chau, Angela Burden
Tags: TDI
314 · 2 · 0 · 18 Feb 2025

Identifying Bias in Deep Neural Networks Using Image Transforms
De Computis (DC), 2024
Sai Teja Erukude, Akhil Joshi, Lior Shamir
173 · 5 · 0 · 17 Dec 2024

Towards Human-centered Design of Explainable Artificial Intelligence (XAI): A Survey of Empirical Studies
Shuai Ma
202 · 5 · 0 · 28 Oct 2024

Fool Me Once? Contrasting Textual and Visual Explanations in a Clinical Decision-Support Setting
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2024
Maxime Kayser, Bayar I. Menzat, Cornelius Emde, Bogdan Bercean, Alex Novak, Abdala Espinosa, B. Papież, Susanne Gaube, Thomas Lukasiewicz, Oana-Maria Camburu
353 · 10 · 0 · 16 Oct 2024

Explainable AI Reloaded: Challenging the XAI Status Quo in the Era of Large Language Models
Upol Ehsan, Mark O. Riedl
208 · 7 · 0 · 09 Aug 2024

Automatic rating of incomplete hippocampal inversions evaluated across multiple cohorts
Machine Learning for Biomedical Imaging (MLBI), 2024
Lisa Hemforth, B. Couvy-Duchesne, Kevin de Matos, Camille Brianceau, Matthieu Joulot, ..., V. Frouin, Alexandre Martin, IMAGEN study group, C. Cury, O. Colliot
184 · 1 · 0 · 05 Aug 2024

On Behalf of the Stakeholders: Trends in NLP Model Interpretability in the Era of LLMs
Nitay Calderon, Roi Reichart
302 · 23 · 0 · 27 Jul 2024

Graphical Perception of Saliency-based Model Explanations
Yayan Zhao, Mingwei Li, Matthew Berger
Tags: XAI, FAtt
238 · 2 · 0 · 11 Jun 2024

The AI-DEC: A Card-based Design Method for User-centered AI Explanations
Christine P. Lee, M. Lee, Bilge Mutlu
Tags: HAI
193 · 9 · 0 · 26 May 2024

Explaining Multi-modal Large Language Models by Analyzing their Vision Perception
British Machine Vision Conference (BMVC), 2024
Loris Giulivi, Giacomo Boracchi
146 · 3 · 0 · 23 May 2024

Concept Visualization: Explaining the CLIP Multi-modal Embedding Using WordNet
IEEE International Joint Conference on Neural Networks (IJCNN), 2024
Loris Giulivi, Giacomo Boracchi
165 · 2 · 0 · 23 May 2024

Unraveling the Dilemma of AI Errors: Exploring the Effectiveness of Human and Machine Explanations for Large Language Models
Marvin Pafla, Kate Larson, Mark Hancock
165 · 7 · 0 · 11 Apr 2024

How explainable AI affects human performance: A systematic review of the behavioural consequences of saliency maps
International Journal of Human-Computer Interaction (IJHCI), 2024
Romy Müller
Tags: HAI
188 · 12 · 0 · 03 Apr 2024

Improving deep learning with prior knowledge and cognitive models: A survey on enhancing explainability, adversarial robustness and zero-shot learning
Cognitive Systems Research (Cogn. Syst. Res.), 2023
F. Mumuni, A. Mumuni
Tags: AAML
258 · 16 · 0 · 11 Mar 2024

Can Interpretability Layouts Influence Human Perception of Offensive Sentences?
Thiago Freitas dos Santos, Nardine Osman, Marco Schorlemmer
153 · 0 · 0 · 01 Mar 2024

Reimagining Anomalies: What If Anomalies Were Normal?
Philipp Liznerski, Saurabh Varshneya, Ece Calikus, Sophie Fellenz, Matthias Kirchler
166 · 4 · 0 · 22 Feb 2024

OpenHEXAI: An Open-Source Framework for Human-Centered Evaluation of Explainable Machine Learning
Jiaqi Ma, Vivian Lai, Yiming Zhang, Chacha Chen, Paul Hamilton, Davor Ljubenkov, Himabindu Lakkaraju, Chenhao Tan
Tags: ELM
140 · 3 · 0 · 20 Feb 2024

Explaining Time Series via Contrastive and Locally Sparse Perturbations
International Conference on Learning Representations (ICLR), 2024
Zichuan Liu, Yingying Zhang, Tianchun Wang, Zefan Wang, Dongsheng Luo, ..., Min Wu, Yi Wang, Chunlin Chen, Lunting Fan, Qingsong Wen
306 · 19 · 0 · 16 Jan 2024

Decoding AI's Nudge: A Unified Framework to Predict Human Behavior in AI-assisted Decision Making
AAAI Conference on Artificial Intelligence (AAAI), 2024
Zhuoyan Li, Zhuoran Lu, Ming Yin
150 · 21 · 0 · 11 Jan 2024

ALMANACS: A Simulatability Benchmark for Language Model Explainability
Edmund Mills, Shiye Su, Stuart J. Russell, Scott Emmons
449 · 9 · 0 · 20 Dec 2023

Error Discovery by Clustering Influence Embeddings
Neural Information Processing Systems (NeurIPS), 2023
Fulton Wang, Julius Adebayo, Sarah Tan, Diego Garcia-Olano, Narine Kokhlikyan
270 · 5 · 0 · 07 Dec 2023

Understanding Parameter Saliency via Extreme Value Theory
Shuo Wang, Issei Sato
Tags: AAML, FAtt
163 · 0 · 0 · 27 Oct 2023

Predictability and Comprehensibility in Post-Hoc XAI Methods: A User-Centered Analysis
Anahid N. Jalali, Bernhard Haslhofer, Simone Kriglstein, Andreas Rauber
Tags: FAtt
239 · 6 · 0 · 21 Sep 2023

TExplain: Explaining Learned Visual Features via Pre-trained (Frozen) Language Models
Saeid Asgari Taghanaki, Aliasghar Khani, Ali Saheb Pasand, Amir Khasahmadi, Aditya Sanghi, K. Willis, Ali Mahdavi-Amiri
Tags: FAtt, VLM
119 · 0 · 0 · 01 Sep 2023

FINER: Enhancing State-of-the-art Classifiers with Feature Attribution to Facilitate Security Analysis
Conference on Computer and Communications Security (CCS), 2023
Yiling He, Jian Lou, Zhan Qin, Kui Ren
Tags: FAtt, AAML
152 · 14 · 0 · 10 Aug 2023

Understanding the Effect of Counterfactual Explanations on Trust and Reliance on AI for Human-AI Collaborative Clinical Decision Making
Min Hun Lee, Chong Jun Chew
187 · 51 · 0 · 08 Aug 2023

Interpretable Sparsification of Brain Graphs: Better Practices and Effective Designs for Graph Neural Networks
Knowledge Discovery and Data Mining (KDD), 2023
Gao Li, M. Duda, Xinming Zhang, Danai Koutra, Yujun Yan
186 · 15 · 0 · 26 Jun 2023

Towards Robust Aspect-based Sentiment Analysis through Non-counterfactual Augmentations
Xinyu Liu, Yanl Ding, Kaikai An, Chunyang Xiao, Pranava Madhyastha, Tong Xiao, Jingbo Zhu
125 · 2 · 0 · 24 Jun 2023

In Search of Verifiability: Explanations Rarely Enable Complementary Performance in AI-Advised Decision Making
The AI Magazine (AI Mag.), 2023
Raymond Fok, Daniel S. Weld
293 · 82 · 0 · 12 May 2023

Multimodal Understanding Through Correlation Maximization and Minimization
Yi Shi, Marc Niethammer
150 · 1 · 0 · 04 May 2023

Explainability in AI Policies: A Critical Review of Communications, Reports, Regulations, and Standards in the EU, US, and UK
Conference on Fairness, Accountability and Transparency (FAccT), 2023
L. Nannini, Agathe Balayn, A. Smith
230 · 50 · 0 · 20 Apr 2023

Performance of GAN-based augmentation for deep learning COVID-19 image classification
Oleksandr Fedoruk, Konrad Klimaszewski, Aleksander Ogonowski, Rafał Możdżonek
Tags: OOD, MedIm
152 · 15 · 0 · 18 Apr 2023

How good Neural Networks interpretation methods really are? A quantitative benchmark
Antoine Passemiers, Pietro Folco, D. Raimondi, G. Birolo, Yves Moreau, P. Fariselli
Tags: FAtt
108 · 2 · 0 · 05 Apr 2023

Model-agnostic explainable artificial intelligence for object detection in image data
Engineering Applications of Artificial Intelligence (Eng. Appl. Artif. Intell.), 2023
M. Moradi, Ke Yan, David Colwell, Matthias Samwald, Rhona Asgari
Tags: AAML
178 · 10 · 0 · 30 Mar 2023

IRIS: Interpretable Rubric-Informed Segmentation for Action Quality Assessment
International Conference on Intelligent User Interfaces (IUI), 2023
Hitoshi Matsuyama, Nobuo Kawaguchi, Brian Y. Lim
112 · 9 · 0 · 16 Mar 2023

The Generalizability of Explanations
IEEE International Joint Conference on Neural Networks (IJCNN), 2023
Hanxiao Tan
Tags: FAtt
121 · 1 · 0 · 23 Feb 2023

Understanding User Preferences in Explainable Artificial Intelligence: A Survey and a Mapping Function Proposal
ACM Transactions on Intelligent Systems and Technology (ACM TIST), 2023
M. Hashemi, Ali Darejeh, Francisco Cruz
301 · 4 · 0 · 07 Feb 2023

Charting the Sociotechnical Gap in Explainable AI: A Framework to Address the Gap in XAI
Upol Ehsan, Koustuv Saha, M. D. Choudhury, Mark O. Riedl
216 · 72 · 0 · 01 Feb 2023

Explainable Deep Reinforcement Learning: State of the Art and Challenges
ACM Computing Surveys (ACM CSUR), 2022
G. Vouros
Tags: XAI
317 · 116 · 0 · 24 Jan 2023

On the Relationship Between Explanation and Prediction: A Causal View
International Conference on Machine Learning (ICML), 2023
Amir-Hossein Karimi, Krikamol Muandet, Simon Kornblith, Bernhard Schölkopf, Been Kim
Tags: FAtt, CML
315 · 16 · 0 · 13 Dec 2022

Post hoc Explanations may be Ineffective for Detecting Unknown Spurious Correlation
International Conference on Learning Representations (ICLR), 2022
Julius Adebayo, M. Muelly, H. Abelson, Been Kim
197 · 93 · 0 · 09 Dec 2022

A Rigorous Study Of The Deep Taylor Decomposition
Leon Sixt, Tim Landgraf
Tags: FAtt, AAML
117 · 7 · 0 · 14 Nov 2022

Towards Human-centered Explainable AI: A Survey of User Studies for Model Explanations
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2022
Yao Rong, Tobias Leemann, Thai-trang Nguyen, Lisa Fiedler, Peizhu Qian, Vaibhav Unhelkar, Tina Seidel, Gjergji Kasneci, Enkelejda Kasneci
Tags: ELM
250 · 153 · 0 · 20 Oct 2022