An Evaluation of the Human-Interpretability of Explanation
arXiv:1902.00006 · 31 January 2019
Isaac Lage, Emily Chen, Jeffrey He, Menaka Narayanan, Been Kim, Sam Gershman, Finale Doshi-Velez
FAtt, XAI
Papers citing "An Evaluation of the Human-Interpretability of Explanation" (23 of 23 papers shown)
Reasoning Models Don't Always Say What They Think
Yanda Chen, Joe Benton, Ansh Radhakrishnan, Jonathan Uesato, Carson E. Denison, ..., Vlad Mikulik, Samuel R. Bowman, Jan Leike, Jared Kaplan, E. Perez
ReLM, LRM · 68 · 15 · 1 · 08 May 2025

What Sketch Explainability Really Means for Downstream Tasks
Hmrishav Bandyopadhyay, Pinaki Nath Chowdhury, A. Bhunia, Aneeshan Sain, Tao Xiang, Yi-Zhe Song
30 · 4 · 0 · 14 Mar 2024
A Review on Explainable Artificial Intelligence for Healthcare: Why, How, and When?
M. Rubaiyat Hossain Mondal, Prajoy Podder
26 · 56 · 0 · 10 Apr 2023
ASQ-IT: Interactive Explanations for Reinforcement-Learning Agents
Yotam Amitai, Guy Avni, Ofra Amir
45 · 3 · 0 · 24 Jan 2023
Counterfactual Explanations for Misclassified Images: How Human and Machine Explanations Differ
Eoin Delaney, A. Pakrashi, Derek Greene, Mark T. Keane
35 · 16 · 0 · 16 Dec 2022
The Influence of Explainable Artificial Intelligence: Nudging Behaviour or Boosting Capability?
Matija Franklin
TDI · 23 · 1 · 0 · 05 Oct 2022

BAGEL: A Benchmark for Assessing Graph Neural Network Explanations
Mandeep Rathee, Thorben Funke, Avishek Anand, Megha Khosla
44 · 15 · 0 · 28 Jun 2022

Use-Case-Grounded Simulations for Explanation Evaluation
Valerie Chen, Nari Johnson, Nicholay Topin, Gregory Plumb, Ameet Talwalkar
FAtt, ELM · 22 · 24 · 0 · 05 Jun 2022

Debiased-CAM to mitigate systematic error with faithful visual explanations of machine learning
Wencan Zhang, Mariella Dimiccoli, Brian Y. Lim
FAtt · 19 · 1 · 0 · 30 Jan 2022

Towards a Science of Human-AI Decision Making: A Survey of Empirical Studies
Vivian Lai, Chacha Chen, Q. V. Liao, Alison Smith-Renner, Chenhao Tan
33 · 186 · 0 · 21 Dec 2021

Explaining Reward Functions to Humans for Better Human-Robot Collaboration
Lindsay M. Sanneman, J. Shah
13 · 5 · 0 · 08 Oct 2021

An Exploration And Validation of Visual Factors in Understanding Classification Rule Sets
Jun Yuan, O. Nov, E. Bertini
20 · 10 · 0 · 19 Sep 2021

Trusting RoBERTa over BERT: Insights from CheckListing the Natural Language Inference Task
Ishan Tarunesh, Somak Aditya, Monojit Choudhury
15 · 17 · 0 · 15 Jul 2021

The Impact of Explanations on AI Competency Prediction in VQA
Kamran Alipour, Arijit Ray, Xiaoyu Lin, J. Schulze, Yi Yao, Giedrius Burachas
27 · 9 · 0 · 02 Jul 2020

Does Explainable Artificial Intelligence Improve Human Decision-Making?
Y. Alufaisan, L. Marusich, J. Bakdash, Yan Zhou, Murat Kantarcioglu
XAI · 22 · 94 · 0 · 19 Jun 2020

Misplaced Trust: Measuring the Interference of Machine Learning in Human Decision-Making
Harini Suresh, Natalie Lao, Ilaria Liccardi
16 · 49 · 0 · 22 May 2020

Explainable Deep Learning: A Field Guide for the Uninitiated
Gabrielle Ras, Ning Xie, Marcel van Gerven, Derek Doran
AAML, XAI · 41 · 371 · 0 · 30 Apr 2020

Sequential Interpretability: Methods, Applications, and Future Direction for Understanding Deep Learning Models in the Context of Sequential Data
B. Shickel, Parisa Rashidi
AI4TS · 33 · 17 · 0 · 27 Apr 2020

Towards Faithfully Interpretable NLP Systems: How should we define and evaluate faithfulness?
Alon Jacovi, Yoav Goldberg
XAI · 28 · 567 · 0 · 07 Apr 2020

The Pragmatic Turn in Explainable Artificial Intelligence (XAI)
Andrés Páez
27 · 190 · 0 · 22 Feb 2020

Algorithmic Recourse: from Counterfactual Explanations to Interventions
Amir-Hossein Karimi, Bernhard Schölkopf, Isabel Valera
CML · 24 · 337 · 0 · 14 Feb 2020

Towards Explainable Artificial Intelligence
Wojciech Samek, K. Müller
XAI · 32 · 436 · 0 · 26 Sep 2019

A Human-Grounded Evaluation of SHAP for Alert Processing
Hilde J. P. Weerts, Werner van Ipenburg, Mykola Pechenizkiy
FAtt · 11 · 70 · 0 · 07 Jul 2019