ResearchTrend.AI

Evaluating Saliency Map Explanations for Convolutional Neural Networks: A User Study (arXiv:2002.00772)

3 February 2020
Ahmed Alqaraawi
M. Schuessler
Philipp Weiß
Enrico Costanza
N. Bianchi-Berthouze
    AAML
    FAtt
    XAI

Papers citing "Evaluating Saliency Map Explanations for Convolutional Neural Networks: A User Study"

37 / 87 papers shown
Towards a multi-stakeholder value-based assessment framework for algorithmic systems
Mireia Yurrita
Dave Murray-Rust
Agathe Balayn
A. Bozzon
MLAU
21
29
0
09 May 2022
Do Users Benefit From Interpretable Vision? A User Study, Baseline, And Dataset
Leon Sixt
M. Schuessler
Oana-Iuliana Popescu
Philipp Weiß
Tim Landgraf
FAtt
24
14
0
25 Apr 2022
Perception Visualization: Seeing Through the Eyes of a DNN
Loris Giulivi
Mark J. Carman
Giacomo Boracchi
8
6
0
21 Apr 2022
Explainable Predictive Process Monitoring: A User Evaluation
Williams Rizzi
M. Comuzzi
Chiara Di Francescomarino
Chiara Ghidini
Suhwan Lee
F. Maggi
Alexander Nolte
FaML
XAI
8
8
0
15 Feb 2022
Machine Explanations and Human Understanding
Chacha Chen
Shi Feng
Amit Sharma
Chenhao Tan
19
24
0
08 Feb 2022
Algorithmic nudge to make better choices: Evaluating effectiveness of XAI frameworks to reveal biases in algorithmic decision making to users
Prerna Juneja
Tanushree Mitra
CML
19
0
0
05 Feb 2022
Metrics for saliency map evaluation of deep learning explanation methods
T. Gomez
Thomas Fréour
Harold Mouchère
XAI
FAtt
64
41
0
31 Jan 2022
Debiased-CAM to mitigate systematic error with faithful visual explanations of machine learning
Wencan Zhang
Mariella Dimiccoli
Brian Y. Lim
FAtt
17
1
0
30 Jan 2022
Towards a Science of Human-AI Decision Making: A Survey of Empirical Studies
Vivian Lai
Chacha Chen
Q. V. Liao
Alison Smith-Renner
Chenhao Tan
18
186
0
21 Dec 2021
Explainability Pitfalls: Beyond Dark Patterns in Explainable AI
Upol Ehsan
Mark O. Riedl
XAI
SILM
51
57
0
26 Sep 2021
Enhancing Model Assessment in Vision-based Interactive Machine Teaching through Real-time Saliency Map Visualization
Zhongyi Zhou
Koji Yatani
FAtt
11
3
0
26 Aug 2021
Where do Models go Wrong? Parameter-Space Saliency Maps for Explainability
Roman Levin
Manli Shu
Eitan Borgnia
Furong Huang
Micah Goldblum
Tom Goldstein
FAtt
AAML
14
10
0
03 Aug 2021
Temporal Dependencies in Feature Importance for Time Series Predictions
Kin Kwan Leung
Clayton Rooke
Jonathan Smith
S. Zuberi
M. Volkovs
OOD
AI4TS
18
23
0
29 Jul 2021
The Who in XAI: How AI Background Shapes Perceptions of AI Explanations
Upol Ehsan
Samir Passi
Q. V. Liao
Larry Chan
I-Hsiang Lee
Michael J. Muller
Mark O. Riedl
27
85
0
28 Jul 2021
Responsible and Regulatory Conform Machine Learning for Medicine: A Survey of Challenges and Solutions
Eike Petersen
Yannik Potdevin
Esfandiar Mohammadi
Stephan Zidowitz
Sabrina Breyer
...
Sandra Henn
Ludwig Pechmann
M. Leucker
P. Rostalski
Christian Herzog
FaML
AILaw
OOD
19
21
0
20 Jul 2021
Roadmap of Designing Cognitive Metrics for Explainable Artificial Intelligence (XAI)
J. H. Hsiao
H. Ngai
Luyu Qiu
Yi Yang
Caleb Chen Cao
XAI
28
27
0
20 Jul 2021
Challenges for machine learning in clinical translation of big data imaging studies
Nicola K. Dinsdale
Emma Bluemke
V. Sundaresan
M. Jenkinson
Stephen Smith
Ana I. L. Namburete
AI4CE
32
41
0
07 Jul 2021
Evaluation of Saliency-based Explainability Method
Sam Zabdiel Sunder Samuel
V. Kamakshi
Namrata Lodhi
N. C. Krishnan
FAtt
XAI
21
12
0
24 Jun 2021
Meaningfully Debugging Model Mistakes using Conceptual Counterfactual Explanations
Abubakar Abid
Mert Yuksekgonul
James Y. Zou
CML
21
64
0
24 Jun 2021
The effectiveness of feature attribution methods and its correlation with automatic evaluation scores
Giang Nguyen
Daeyoung Kim
Anh Totti Nguyen
FAtt
8
86
0
31 May 2021
Two4Two: Evaluating Interpretable Machine Learning - A Synthetic Dataset For Controlled Experiments
M. Schuessler
Philipp Weiß
Leon Sixt
24
3
0
06 May 2021
Automatic Diagnosis of COVID-19 from CT Images using CycleGAN and Transfer Learning
Navid Ghassemi
A. Shoeibi
Marjane Khodatars
Jónathan Heras
Alireza Rahimi
A. Zare
R. B. Pachori
Juan M Gorriz
MedIm
35
56
0
24 Apr 2021
LEx: A Framework for Operationalising Layers of Machine Learning Explanations
Ronal Singh
Upol Ehsan
M. Cheong
Mark O. Riedl
Tim Miller
21
3
0
15 Apr 2021
Question-Driven Design Process for Explainable AI User Experiences
Q. V. Liao
Milena Pribić
Jaesik Han
Sarah Miller
Daby M. Sow
15
52
0
08 Apr 2021
Expanding Explainability: Towards Social Transparency in AI systems
Upol Ehsan
Q. V. Liao
Michael J. Muller
Mark O. Riedl
Justin D. Weisz
43
390
0
12 Jan 2021
GANterfactual - Counterfactual Explanations for Medical Non-Experts using Generative Adversarial Learning
Silvan Mertes
Tobias Huber
Katharina Weitz
Alexander Heimerl
Elisabeth André
GAN
AAML
MedIm
26
69
0
22 Dec 2020
Debiased-CAM to mitigate image perturbations with faithful visual explanations of machine learning
Wencan Zhang
Mariella Dimiccoli
Brian Y. Lim
FAtt
13
18
0
10 Dec 2020
Representaciones del aprendizaje reutilizando los gradientes de la retropropagacion (Learning representations by reusing the backpropagation gradients)
Roberto Reyes-Ochoa
Servando Lopez-Aguayo
FAtt
10
0
0
06 Dec 2020
Debugging Tests for Model Explanations
Julius Adebayo
M. Muelly
Ilaria Liccardi
Been Kim
FAtt
6
177
0
10 Nov 2020
Towards falsifiable interpretability research
Matthew L. Leavitt
Ari S. Morcos
AAML
AI4CE
8
67
0
22 Oct 2020
Measure Utility, Gain Trust: Practical Advice for XAI Researchers
B. Pierson
M. Glenski
William I. N. Sealy
Dustin L. Arendt
8
28
0
27 Sep 2020
Local and Global Explanations of Agent Behavior: Integrating Strategy Summaries with Saliency Maps
Tobias Huber
Katharina Weitz
Elisabeth André
Ofra Amir
FAtt
11
63
0
18 May 2020
Explainable Goal-Driven Agents and Robots -- A Comprehensive Review
F. Sado
C. K. Loo
W. S. Liew
Matthias Kerzel
S. Wermter
11
48
0
21 Apr 2020
How to Support Users in Understanding Intelligent Systems? Structuring the Discussion
Malin Eiband
Daniel Buschek
H. Hussmann
37
28
0
22 Jan 2020
Secure and Robust Machine Learning for Healthcare: A Survey
A. Qayyum
Junaid Qadir
Muhammad Bilal
Ala I. Al-Fuqaha
AAML
OOD
31
374
0
21 Jan 2020
When Explanations Lie: Why Many Modified BP Attributions Fail
Leon Sixt
Maximilian Granz
Tim Landgraf
BDL
FAtt
XAI
11
132
0
20 Dec 2019
Towards A Rigorous Science of Interpretable Machine Learning
Finale Doshi-Velez
Been Kim
XAI
FaML
225
3,681
0
28 Feb 2017