
Evaluating Saliency Map Explanations for Convolutional Neural Networks: A User Study
arXiv: 2002.00772
International Conference on Intelligent User Interfaces (IUI), 2020
3 February 2020
Ahmed Alqaraawi, M. Schuessler, Philipp Weiß, Enrico Costanza, N. Bianchi-Berthouze
Tags: AAML, FAtt, XAI

Papers citing "Evaluating Saliency Map Explanations for Convolutional Neural Networks: A User Study"

40 / 90 papers shown
Assessing Out-of-Domain Language Model Performance from Few Examples
Conference of the European Chapter of the Association for Computational Linguistics (EACL), 2022
Prasann Singhal, Jarad Forristal, Xi Ye, Greg Durrett
Tags: LRM
182 · 6 · 0 · 13 Oct 2022
Responsibility: An Example-based Explainable AI approach via Training Process Inspection
Faraz Khadivpour, Arghasree Banerjee, Matthew J. Guzdial
Tags: XAI
153 · 2 · 0 · 07 Sep 2022
Monitoring Shortcut Learning using Mutual Information
Mohammed Adnan, Yani Andrew Ioannou, Chuan-Yung Tsai, A. Galloway, H. R. Tizhoosh, Graham W. Taylor
106 · 7 · 0 · 27 Jun 2022
Comparison of attention models and post-hoc explanation methods for embryo stage identification: a case study
T. Gomez, Thomas Fréour, Harold Mouchère
148 · 3 · 0 · 13 May 2022
Towards a multi-stakeholder value-based assessment framework for algorithmic systems
Conference on Fairness, Accountability and Transparency (FAccT), 2022
Mireia Yurrita, Dave Murray-Rust, Agathe Balayn, A. Bozzon
Tags: MLAU
200 · 36 · 0 · 09 May 2022
Do Users Benefit From Interpretable Vision? A User Study, Baseline, And Dataset
International Conference on Learning Representations (ICLR), 2022
Leon Sixt, M. Schuessler, Oana-Iuliana Popescu, Philipp Weiß, Tim Landgraf
Tags: FAtt
189 · 18 · 0 · 25 Apr 2022
Perception Visualization: Seeing Through the Eyes of a DNN
British Machine Vision Conference (BMVC), 2022
Loris Giulivi, Mark J. Carman, Giacomo Boracchi
126 · 6 · 0 · 21 Apr 2022
Explainable Predictive Process Monitoring: A User Evaluation
Williams Rizzi, M. Comuzzi, Chiara Di Francescomarino, Chiara Ghidini, Suhwan Lee, F. Maggi, Alexander Nolte
Tags: FaML, XAI
199 · 14 · 0 · 15 Feb 2022
Machine Explanations and Human Understanding
Chacha Chen, Shi Feng, Amit Sharma, Chenhao Tan
298 · 31 · 0 · 08 Feb 2022
Algorithmic nudge to make better choices: Evaluating effectiveness of XAI frameworks to reveal biases in algorithmic decision making to users
Prerna Juneja, Tanushree Mitra
Tags: CML
80 · 0 · 0 · 05 Feb 2022
Metrics for saliency map evaluation of deep learning explanation methods
International Conference on Pattern Recognition and Artificial Intelligence (ICPRAI), 2022
T. Gomez, Thomas Fréour, Harold Mouchère
Tags: XAI, FAtt
271 · 51 · 0 · 31 Jan 2022
Debiased-CAM to mitigate systematic error with faithful visual explanations of machine learning
Wencan Zhang, Mariella Dimiccoli, Brian Y. Lim
Tags: FAtt
165 · 1 · 0 · 30 Jan 2022
Towards a Science of Human-AI Decision Making: A Survey of Empirical Studies
Vivian Lai, Chacha Chen, Q. V. Liao, Alison Smith-Renner, Chenhao Tan
249 · 208 · 0 · 21 Dec 2021
Explainability Pitfalls: Beyond Dark Patterns in Explainable AI
Upol Ehsan, Mark O. Riedl
Tags: XAI, SILM
151 · 77 · 0 · 26 Sep 2021
Enhancing Model Assessment in Vision-based Interactive Machine Teaching through Real-time Saliency Map Visualization
ACM Symposium on User Interface Software and Technology (UIST), 2021
Zhongyi Zhou, Koji Yatani
Tags: FAtt
60 · 5 · 0 · 26 Aug 2021
Where do Models go Wrong? Parameter-Space Saliency Maps for Explainability
Roman Levin, Manli Shu, Eitan Borgnia, Furong Huang, Micah Goldblum, Tom Goldstein
Tags: FAtt, AAML
87 · 12 · 0 · 03 Aug 2021
Temporal Dependencies in Feature Importance for Time Series Predictions
International Conference on Learning Representations (ICLR), 2021
Kin Kwan Leung, Clayton Rooke, Jonathan Smith, S. Zuberi, Anthony L. Caterini
Tags: OOD, AI4TS
177 · 35 · 0 · 29 Jul 2021
The Who in XAI: How AI Background Shapes Perceptions of AI Explanations
International Conference on Human Factors in Computing Systems (CHI), 2021
Upol Ehsan, Samir Passi, Q. V. Liao, Larry Chan, I-Hsiang Lee, Michael J. Muller, Mark O. Riedl
201 · 108 · 0 · 28 Jul 2021
Responsible and Regulatory Conform Machine Learning for Medicine: A Survey of Challenges and Solutions
IEEE Access (IEEE Access), 2021
Eike Petersen, Yannik Potdevin, Esfandiar Mohammadi, Stephan Zidowitz, Sabrina Breyer, ..., Sandra Henn, Ludwig Pechmann, M. Leucker, P. Rostalski, Christian Herzog
Tags: FaML, AILaw, OOD
217 · 34 · 0 · 20 Jul 2021
Roadmap of Designing Cognitive Metrics for Explainable Artificial Intelligence (XAI)
J. H. Hsiao, H. Ngai, Luyu Qiu, Yi Yang, Caleb Chen Cao
Tags: XAI
126 · 32 · 0 · 20 Jul 2021
Challenges for machine learning in clinical translation of big data imaging studies
Nicola K. Dinsdale, Emma Bluemke, V. Sundaresan, M. Jenkinson, Stephen Smith, Ana I. L. Namburete
Tags: AI4CE
183 · 61 · 0 · 07 Jul 2021
Evaluation of Saliency-based Explainability Method
Sam Zabdiel Sunder Samuel, V. Kamakshi, Namrata Lodhi, N. C. Krishnan
Tags: FAtt, XAI
147 · 16 · 0 · 24 Jun 2021
Meaningfully Debugging Model Mistakes using Conceptual Counterfactual Explanations
Abubakar Abid, Mert Yuksekgonul, James Zou
Tags: CML
176 · 72 · 0 · 24 Jun 2021
The effectiveness of feature attribution methods and its correlation with automatic evaluation scores
Neural Information Processing Systems (NeurIPS), 2021
Giang Nguyen, Daeyoung Kim, Anh Totti Nguyen
Tags: FAtt
339 · 101 · 0 · 31 May 2021
Two4Two: Evaluating Interpretable Machine Learning - A Synthetic Dataset For Controlled Experiments
M. Schuessler, Philipp Weiß, Leon Sixt
127 · 3 · 0 · 06 May 2021
Automatic Diagnosis of COVID-19 from CT Images using CycleGAN and Transfer Learning
Applied Soft Computing (Appl Soft Comput), 2021
Navid Ghassemi, A. Shoeibi, Marjane Khodatars, Jónathan Heras, Alireza Rahimi, A. Zare, R. B. Pachori, Juan M Gorriz
Tags: MedIm
162 · 67 · 0 · 24 Apr 2021
LEx: A Framework for Operationalising Layers of Machine Learning Explanations
Ronal Singh, Upol Ehsan, M. Cheong, Mark O. Riedl, Tim Miller
96 · 5 · 0 · 15 Apr 2021
Question-Driven Design Process for Explainable AI User Experiences
Q. V. Liao, Milena Pribić, Jaesik Han, Sarah Miller, Daby M. Sow
254 · 63 · 0 · 08 Apr 2021
Expanding Explainability: Towards Social Transparency in AI systems
International Conference on Human Factors in Computing Systems (CHI), 2021
Upol Ehsan, Q. V. Liao, Michael J. Muller, Mark O. Riedl, Justin D. Weisz
224 · 471 · 0 · 12 Jan 2021
GANterfactual - Counterfactual Explanations for Medical Non-Experts using Generative Adversarial Learning
Frontiers in Artificial Intelligence (FAI), 2020
Silvan Mertes, Tobias Huber, Katharina Weitz, Alexander Heimerl, Elisabeth André
Tags: GAN, AAML, MedIm
284 · 100 · 0 · 22 Dec 2020
Debiased-CAM to mitigate image perturbations with faithful visual explanations of machine learning
International Conference on Human Factors in Computing Systems (CHI), 2020
Wencan Zhang, Mariella Dimiccoli, Brian Y. Lim
Tags: FAtt
275 · 19 · 0 · 10 Dec 2020
Representaciones del aprendizaje reutilizando los gradientes de la retropropagación (Learning representations by reusing backpropagation gradients)
Roberto Reyes-Ochoa, Servando Lopez-Aguayo
Tags: FAtt
87 · 0 · 0 · 06 Dec 2020
Debugging Tests for Model Explanations
Julius Adebayo, M. Muelly, Ilaria Liccardi, Been Kim
Tags: FAtt
254 · 197 · 0 · 10 Nov 2020
Towards falsifiable interpretability research
Matthew L. Leavitt, Ari S. Morcos
Tags: AAML, AI4CE
156 · 73 · 0 · 22 Oct 2020
Measure Utility, Gain Trust: Practical Advice for XAI Researchers
B. Pierson, M. Glenski, William I. N. Sealy, Dustin L. Arendt
135 · 29 · 0 · 27 Sep 2020
Local and Global Explanations of Agent Behavior: Integrating Strategy Summaries with Saliency Maps
Tobias Huber, Katharina Weitz, Elisabeth André, Ofra Amir
Tags: FAtt
280 · 70 · 0 · 18 May 2020
Explainable Goal-Driven Agents and Robots -- A Comprehensive Review
ACM Computing Surveys (ACM CSUR), 2020
F. Sado, C. K. Loo, W. S. Liew, Matthias Kerzel, S. Wermter
307 · 68 · 0 · 21 Apr 2020
How to Support Users in Understanding Intelligent Systems? Structuring the Discussion
International Conference on Intelligent User Interfaces (IUI), 2020
Malin Eiband, Daniel Buschek, H. Hussmann
221 · 30 · 0 · 22 Jan 2020
Secure and Robust Machine Learning for Healthcare: A Survey
IEEE Reviews in Biomedical Engineering (RBME), 2020
A. Qayyum, Junaid Qadir, Muhammad Bilal, Ala I. Al-Fuqaha
Tags: AAML, OOD
208 · 434 · 0 · 21 Jan 2020
When Explanations Lie: Why Many Modified BP Attributions Fail
International Conference on Machine Learning (ICML), 2019
Leon Sixt, Maximilian Granz, Tim Landgraf
Tags: BDL, FAtt, XAI
476 · 141 · 0 · 20 Dec 2019