arXiv:1802.00682

How do Humans Understand Explanations from Machine Learning Systems? An Evaluation of the Human-Interpretability of Explanation
2 February 2018
Menaka Narayanan, Emily Chen, Jeffrey He, Been Kim, S. Gershman, Finale Doshi-Velez
FAtt, XAI
Papers citing "How do Humans Understand Explanations from Machine Learning Systems? An Evaluation of the Human-Interpretability of Explanation"
(5 of 105 papers shown)
Do Explanations make VQA Models more Predictable to a Human?
Arjun Chandrasekaran, Viraj Prabhu, Deshraj Yadav, Prithvijit Chattopadhyay, Devi Parikh
FAtt
29 Oct 2018

What can AI do for me: Evaluating Machine Learning Interpretations in Cooperative Play
Shi Feng, Jordan L. Boyd-Graber
HAI
23 Oct 2018

Human-in-the-Loop Interpretability Prior
Isaac Lage, A. Ross, Been Kim, S. Gershman, Finale Doshi-Velez
29 May 2018

Seq2Seq-Vis: A Visual Debugging Tool for Sequence-to-Sequence Models
Hendrik Strobelt, Sebastian Gehrmann, M. Behrisch, Adam Perer, Hanspeter Pfister, Alexander M. Rush
VLM, HAI
25 Apr 2018

A review of possible effects of cognitive biases on the interpretation of rule-based machine learning models
Tomáš Kliegr, Š. Bahník, Johannes Fürnkranz
09 Apr 2018