Proxy Tasks and Subjective Measures Can Be Misleading in Evaluating Explainable AI Systems
arXiv:2001.08298 · 22 January 2020
Zana Buçinca, Phoebe Lin, Krzysztof Z. Gajos, Elena L. Glassman · ELM
Papers citing "Proxy Tasks and Subjective Measures Can Be Misleading in Evaluating Explainable AI Systems" (8 of 58 shown)
Outcome-Explorer: A Causality Guided Interactive Visual Interface for Interpretable Algorithmic Decision Making
Md. Naimul Hoque, Klaus Mueller · CML · 03 Jan 2021

What Do You See? Evaluation of Explainable Artificial Intelligence (XAI) Interpretability through Neural Backdoors
Yi-Shan Lin, Wen-Chuan Lee, Z. Berkay Celik · XAI · 22 Sep 2020

Play MNIST For Me! User Studies on the Effects of Post-Hoc, Example-Based Explanations & Error Rates on Debugging a Deep Learning, Black-Box Classifier
Courtney Ford, Eoin M. Kenny, Mark T. Keane · 10 Sep 2020

Does the Whole Exceed its Parts? The Effect of AI Explanations on Complementary Team Performance
Gagan Bansal, Tongshuang Wu, Joyce Zhou, Raymond Fok, Besmira Nushi, Ece Kamar, Marco Tulio Ribeiro, Daniel S. Weld · 26 Jun 2020

Does Explainable Artificial Intelligence Improve Human Decision-Making?
Y. Alufaisan, L. Marusich, J. Bakdash, Yan Zhou, Murat Kantarcioglu · XAI · 19 Jun 2020

Explaining Explanations: Axiomatic Feature Interactions for Deep Networks
Joseph D. Janizek, Pascal Sturmfels, Su-In Lee · FAtt · 10 Feb 2020

Leveraging Latent Features for Local Explanations
Ronny Luss, Pin-Yu Chen, Amit Dhurandhar, P. Sattigeri, Yunfeng Zhang, Karthikeyan Shanmugam, Chun-Chen Tu · FAtt · 29 May 2019

Towards A Rigorous Science of Interpretable Machine Learning
Finale Doshi-Velez, Been Kim · XAI, FaML · 28 Feb 2017