How do Humans Understand Explanations from Machine Learning Systems? An Evaluation of the Human-Interpretability of Explanation

2 February 2018 · arXiv:1802.00682
Menaka Narayanan, Emily Chen, Jeffrey He, Been Kim, S. Gershman, Finale Doshi-Velez
Tags: FAtt, XAI

Papers citing "How do Humans Understand Explanations from Machine Learning Systems? An Evaluation of the Human-Interpretability of Explanation"

Showing 50 of 105 citing papers.

Crowdsourcing Evaluation of Saliency-based XAI Methods
Xiaotian Lu, A. Tolmachev, Tatsuya Yamamoto, Koh Takeuchi, Seiji Okajima, T. Takebayashi, Koji Maruhashi, H. Kashima
Tags: XAI, FAtt
27 Jun 2021

Explainable AI for medical imaging: Explaining pneumothorax diagnoses with Bayesian Teaching
Tomas Folke, Scott Cheng-Hsin Yang, S. Anderson, Patrick Shafto
08 Jun 2021

A Review on Explainability in Multimodal Deep Neural Nets
Gargi Joshi, Rahee Walambe, K. Kotecha
17 May 2021

Explanation-Based Human Debugging of NLP Models: A Survey
Piyawat Lertvittayakumjorn, Francesca Toni
Tags: LRM
30 Apr 2021

Interpretable Deep Learning: Interpretation, Interpretability, Trustworthiness, and Beyond
Xuhong Li, Haoyi Xiong, Xingjian Li, Xuanyu Wu, Xiao Zhang, Ji Liu, Jiang Bian, Dejing Dou
Tags: AAML, FaML, XAI, HAI
19 Mar 2021

Robust Model Compression Using Deep Hypotheses
Omri Armstrong, Ran Gilad-Bachrach
Tags: OOD
13 Mar 2021

Towards Unbiased and Accurate Deferral to Multiple Experts
Vijay Keswani, Matthew Lease, K. Kenthapadi
Tags: FaML
25 Feb 2021

EUCA: the End-User-Centered Explainable AI Framework
Weina Jin, Jianyu Fan, D. Gromala, Philippe Pasquier, Ghassan Hamarneh
04 Feb 2021

Beyond Expertise and Roles: A Framework to Characterize the Stakeholders of Interpretable Machine Learning and their Needs
Harini Suresh, Steven R. Gomez, K. Nam, Arvind Satyanarayan
24 Jan 2021

How can I choose an explainer? An Application-grounded Evaluation of Post-hoc Explanations
Sérgio Jesus, Catarina Belém, Vladimir Balayan, João Bento, Pedro Saleiro, P. Bizarro, João Gama
21 Jan 2021

Biased Models Have Biased Explanations
Aditya Jain, Manish Ravula, Joydeep Ghosh
Tags: FaML
20 Dec 2020

Debiased-CAM to mitigate image perturbations with faithful visual explanations of machine learning
Wencan Zhang, Mariella Dimiccoli, Brian Y. Lim
Tags: FAtt
10 Dec 2020

A Survey on the Explainability of Supervised Machine Learning
Nadia Burkart, Marco F. Huber
Tags: FaML, XAI
16 Nov 2020

Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty
Umang Bhatt, Javier Antorán, Yunfeng Zhang, Q. V. Liao, P. Sattigeri, ..., L. Nachman, R. Chunara, Madhulika Srikumar, Adrian Weller, Alice Xiang
15 Nov 2020

Domain-Level Explainability -- A Challenge for Creating Trust in Superhuman AI Strategies
Jonas Andrulis, Ole Meyer, Grégory Schott, Samuel Weinbach, V. Gruhn
12 Nov 2020

Explainable Automated Fact-Checking: A Survey
Neema Kotonya, Francesca Toni
07 Nov 2020

Understanding Information Processing in Human Brain by Interpreting Machine Learning Models
Ilya Kuzovkin
Tags: HAI
17 Oct 2020

Explaining Creative Artifacts
L. Varshney, Nazneen Rajani, R. Socher
14 Oct 2020

A Diagnostic Study of Explainability Techniques for Text Classification
Pepa Atanasova, J. Simonsen, Christina Lioma, Isabelle Augenstein
Tags: XAI, FAtt
25 Sep 2020

How Good is your Explanation? Algorithmic Stability Measures to Assess the Quality of Explanations for Deep Neural Networks
Thomas Fel, David Vigouroux, Rémi Cadène, Thomas Serre
Tags: XAI, FAtt
07 Sep 2020

Are Visual Explanations Useful? A Case Study in Model-in-the-Loop Prediction
Eric Chu, D. Roy, Jacob Andreas
Tags: FAtt, LRM
23 Jul 2020

The Impact of Explanations on AI Competency Prediction in VQA
Kamran Alipour, Arijit Ray, Xiaoyu Lin, J. Schulze, Yi Yao, Giedrius Burachas
02 Jul 2020

Does the Whole Exceed its Parts? The Effect of AI Explanations on Complementary Team Performance
Gagan Bansal, Tongshuang Wu, Joyce Zhou, Raymond Fok, Besmira Nushi, Ece Kamar, Marco Tulio Ribeiro, Daniel S. Weld
26 Jun 2020

Misplaced Trust: Measuring the Interference of Machine Learning in Human Decision-Making
Harini Suresh, Natalie Lao, Ilaria Liccardi
22 May 2020

Explainable Deep Learning: A Field Guide for the Uninitiated
Gabrielle Ras, Ning Xie, Marcel van Gerven, Derek Doran
Tags: AAML, XAI
30 Apr 2020

Adversarial Attacks and Defenses: An Interpretation Perspective
Ninghao Liu, Mengnan Du, Ruocheng Guo, Huan Liu, Xia Hu
Tags: AAML
23 Apr 2020

Human Factors in Model Interpretability: Industry Practices, Challenges, and Needs
Sungsoo Ray Hong, Jessica Hullman, E. Bertini
Tags: HAI
23 Apr 2020

Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications
Wojciech Samek, G. Montavon, Sebastian Lapuschkin, Christopher J. Anders, K. Müller
Tags: XAI
17 Mar 2020

A Study on Multimodal and Interactive Explanations for Visual Question Answering
Kamran Alipour, J. Schulze, Yi Yao, Avi Ziskind, Giedrius Burachas
01 Mar 2020

What Would You Ask the Machine Learning Model? Identification of User Needs for Model Explanations Based on Human-Model Conversations
Michal Kuzba, P. Biecek
Tags: HAI
07 Feb 2020

Evaluating Saliency Map Explanations for Convolutional Neural Networks: A User Study
Ahmed Alqaraawi, M. Schuessler, Philipp Weiß, Enrico Costanza, N. Bianchi-Berthouze
Tags: AAML, FAtt, XAI
03 Feb 2020

Explainable Active Learning (XAL): An Empirical Study of How Local Explanations Impact Annotator Experience
Bhavya Ghai, Q. V. Liao, Yunfeng Zhang, Rachel K. E. Bellamy, Klaus Mueller
24 Jan 2020

Making deep neural networks right for the right scientific reasons by interacting with their explanations
P. Schramowski, Wolfgang Stammer, Stefano Teso, Anna Brugger, Xiaoting Shao, Hans-Georg Luigs, Anne-Katrin Mahlein, Kristian Kersting
15 Jan 2020

Questioning the AI: Informing Design Practices for Explainable AI User Experiences
Q. V. Liao, D. Gruen, Sarah Miller
08 Jan 2020

Auditing and Debugging Deep Learning Models via Decision Boundaries: Individual-level and Group-level Analysis
Roozbeh Yousefzadeh, D. O’Leary
Tags: AAML, FAtt
03 Jan 2020

Measuring the Quality of Explanations: The System Causability Scale (SCS). Comparing Human and Machine Explanations
Andreas Holzinger, André M. Carrington, Heimo Muller
Tags: LRM, XAI, ELM
19 Dec 2019

Analysis of Explainers of Black Box Deep Neural Networks for Computer Vision: A Survey
Vanessa Buhrmester, David Münch, Michael Arens
Tags: MLAU, FaML, XAI, AAML
27 Nov 2019

"How do I fool you?": Manipulating User Trust via Misleading Black Box
  Explanations
"How do I fool you?": Manipulating User Trust via Misleading Black Box Explanations
Himabindu Lakkaraju
Osbert Bastani
6
249
0
15 Nov 2019
DeepView: Visualizing Classification Boundaries of Deep Neural Networks as Scatter Plots Using Discriminative Dimensionality Reduction
Alexander Schulz, Fabian Hinder, Barbara Hammer
Tags: FAtt
19 Sep 2019

Benchmarking Attribution Methods with Relative Feature Importance
Mengjiao Yang, Been Kim
Tags: FAtt, XAI
23 Jul 2019

Evaluating Explanation Without Ground Truth in Interpretable Machine Learning
Fan Yang, Mengnan Du, Xia Hu
Tags: XAI, ELM
16 Jul 2019

Generating User-friendly Explanations for Loan Denials using GANs
Ramya Srinivasan, Ajay Chander, Pouya Pezeshkpour
Tags: FaML
24 Jun 2019

What Clinicians Want: Contextualizing Explainable Machine Learning for Clinical End Use
S. Tonekaboni, Shalmali Joshi, M. Mccradden, Anna Goldenberg
13 May 2019

VINE: Visualizing Statistical Interactions in Black Box Models
M. Britton
Tags: FAtt
01 Apr 2019

Explanation in Human-AI Systems: A Literature Meta-Review, Synopsis of Key Ideas and Publications, and Bibliography for Explainable AI
Shane T. Mueller, R. Hoffman, W. Clancey, Abigail Emrey, Gary Klein
Tags: XAI
05 Feb 2019

Explaining Models: An Empirical Study of How Explanations Impact Fairness Judgment
Jonathan Dodge, Q. V. Liao, Yunfeng Zhang, Rachel K. E. Bellamy, Casey Dugan
Tags: FaML
23 Jan 2019

Personalized explanation in machine learning: A conceptualization
J. Schneider, J. Handali
Tags: XAI, FAtt
03 Jan 2019

A Multidisciplinary Survey and Framework for Design and Evaluation of Explainable AI Systems
Sina Mohseni, Niloofar Zarei, Eric D. Ragan
28 Nov 2018

Towards Explainable Deep Learning for Credit Lending: A Case Study
C. Modarres, Mark Ibrahim, Melissa Louie, John Paisley
Tags: FaML
15 Nov 2018

Deep Weighted Averaging Classifiers
Dallas Card, Michael J.Q. Zhang, Hao Tang
06 Nov 2018