How do Humans Understand Explanations from Machine Learning Systems? An Evaluation of the Human-Interpretability of Explanation
arXiv:1802.00682

2 February 2018
Menaka Narayanan, Emily Chen, Jeffrey He, Been Kim, S. Gershman, Finale Doshi-Velez
FAtt, XAI

Papers citing "How do Humans Understand Explanations from Machine Learning Systems? An Evaluation of the Human-Interpretability of Explanation"

5 / 105 papers shown
Do Explanations make VQA Models more Predictable to a Human? (29 Oct 2018)
Arjun Chandrasekaran, Viraj Prabhu, Deshraj Yadav, Prithvijit Chattopadhyay, Devi Parikh
FAtt
84 · 97 · 0
What can AI do for me: Evaluating Machine Learning Interpretations in Cooperative Play (23 Oct 2018)
Shi Feng, Jordan L. Boyd-Graber
HAI
9 · 127 · 0
Human-in-the-Loop Interpretability Prior (29 May 2018)
Isaac Lage, A. Ross, Been Kim, S. Gershman, Finale Doshi-Velez
32 · 120 · 0
Seq2Seq-Vis: A Visual Debugging Tool for Sequence-to-Sequence Models (25 Apr 2018)
Hendrik Strobelt, Sebastian Gehrmann, M. Behrisch, Adam Perer, Hanspeter Pfister, Alexander M. Rush
VLM, HAI
31 · 239 · 0
A review of possible effects of cognitive biases on the interpretation of rule-based machine learning models (09 Apr 2018)
Tomáš Kliegr, Š. Bahník, Johannes Fürnkranz
9 · 100 · 0