An Evaluation of the Human-Interpretability of Explanation
v2 (latest)

31 January 2019
Isaac Lage, Emily Chen, Jeffrey He, Menaka Narayanan, Been Kim, Sam Gershman, Finale Doshi-Velez
FAtt, XAI

Papers citing "An Evaluation of the Human-Interpretability of Explanation"

50 / 60 papers shown

On the notion of missingness for path attribution explainability methods in medical settings: Guiding the selection of medically meaningful baselines
Alexander Geiger, Lars Wagner, Daniel Rueckert, Dirk Wilhelm, A. Jell
OOD, BDL, MedIm
267 · 0 · 0
20 Aug 2025

Never Compromise to Vulnerabilities: A Comprehensive Survey on AI Governance
Yuchu Jiang, Jian Zhao, Yuchen Yuan, Tianle Zhang, Yao Huang, ..., Ya Zhang, Shuicheng Yan, Chi Zhang, Z. He, Xuelong Li
SILM
382 · 2 · 0
12 Aug 2025

CASE: Contrastive Activation for Saliency Estimation
Dane Williamson, Yangfeng Ji, Matthew B. Dwyer
FAtt, AAML
281 · 0 · 0
08 Jun 2025

Should I Share this Translation? Evaluating Quality Feedback for User Reliance on Machine Translation
Dayeon Ki, Kevin Duh, Marine Carpuat
170 · 2 · 0
30 May 2025

See What I Mean? CUE: A Cognitive Model of Understanding Explanations
Tobias Labarta, Nhi Hoang, Katharina Weitz, Wojciech Samek, Sebastian Lapuschkin, Leander Weber
197 · 0 · 0
09 May 2025

Reasoning Models Don't Always Say What They Think
Yanda Chen, Joe Benton, Ansh Radhakrishnan, Jonathan Uesato, Carson E. Denison, ..., Vlad Mikulik, Samuel R. Bowman, Jan Leike, Jared Kaplan, E. Perez
ReLM, LRM
378 · 165 · 1
08 May 2025

B-cos LM: Efficiently Transforming Pre-trained Language Models for Improved Explainability
Yifan Wang, Sukrut Rao, Ji-Ung Lee, Mayank Jobanputra, Vera Demberg
167 · 0 · 0
18 Feb 2025

Generative Example-Based Explanations: Bridging the Gap between Generative Modeling and Explainability
Philipp Vaeth, Alexander M. Fruehwald, Benjamin Paassen, Magda Gregorova
GAN
236 · 1 · 0
28 Oct 2024

A Sim2Real Approach for Identifying Task-Relevant Properties in Interpretable Machine Learning
Eura Nofshin, Esther Brown, Brian Lim, Weiwei Pan, Finale Doshi-Velez
236 · 1 · 0
31 May 2024

Does Your Model Think Like an Engineer? Explainable AI for Bearing Fault Detection with Deep Learning
Thomas Decker, Michael Lebacher, Volker Tresp
97 · 14 · 0
19 Oct 2023

Quantifying the Plausibility of Context Reliance in Neural Machine Translation
International Conference on Learning Representations (ICLR), 2023
Gabriele Sarti, Grzegorz Chrupala, Malvina Nissim, Arianna Bisazza
242 · 5 · 0
02 Oct 2023

Explaining Groups of Instances Counterfactually for XAI: A Use Case, Algorithm and User Study for Group-Counterfactuals
Greta Warren, Mark T. Keane, Christophe Guéret, Eoin Delaney
125 · 14 · 0
16 Mar 2023

ASQ-IT: Interactive Explanations for Reinforcement-Learning Agents
Yotam Amitai, Guy Avni, Ofra Amir
232 · 3 · 0
24 Jan 2023

Selective Explanations: Leveraging Human Input to Align Explainable AI
Vivian Lai, Yiming Zhang, Chacha Chen, Q. V. Liao, Chenhao Tan
287 · 55 · 0
23 Jan 2023

Counterfactual Explanations for Misclassified Images: How Human and Machine Explanations Differ
Artificial Intelligence (AI), 2022
Eoin Delaney, A. Pakrashi, Derek Greene, Mark T. Keane
210 · 21 · 0
16 Dec 2022

The Influence of Explainable Artificial Intelligence: Nudging Behaviour or Boosting Capability?
Matija Franklin
TDI
149 · 1 · 0
05 Oct 2022

BAGEL: A Benchmark for Assessing Graph Neural Network Explanations
Mandeep Rathee, Thorben Funke, Avishek Anand, Megha Khosla
139 · 17 · 0
28 Jun 2022

OpenXAI: Towards a Transparent Evaluation of Model Explanations
Chirag Agarwal, Dan Ley, Satyapriya Krishna, Eshika Saxena, Martin Pawelczyk, Nari Johnson, Isha Puri, Marinka Zitnik, Himabindu Lakkaraju
XAI
386 · 167 · 0
22 Jun 2022

Xplique: A Deep Learning Explainability Toolbox
Thomas Fel, Lucas Hervier, David Vigouroux, Antonin Poché, Justin Plakoo, ..., Agustin Picard, C. Nicodeme, Laurent Gardes, G. Flandin, Thomas Serre
168 · 40 · 0
09 Jun 2022

Use-Case-Grounded Simulations for Explanation Evaluation
Neural Information Processing Systems (NeurIPS), 2022
Valerie Chen, Nari Johnson, Nicholay Topin, Gregory Plumb, Ameet Talwalkar
FAtt, ELM
174 · 24 · 0
05 Jun 2022

Fairness via Explanation Quality: Evaluating Disparities in the Quality of Post hoc Explanations
AAAI/ACM Conference on AI, Ethics, and Society (AIES), 2022
Jessica Dai, Sohini Upadhyay, Ulrich Aïvodji, Stephen H. Bach, Himabindu Lakkaraju
209 · 61 · 0
15 May 2022

Rethinking Explainability as a Dialogue: A Practitioner's Perspective
Himabindu Lakkaraju, Dylan Slack, Yuxin Chen, Chenhao Tan, Sameer Singh
LRM
197 · 72 · 0
03 Feb 2022

Debiased-CAM to mitigate systematic error with faithful visual explanations of machine learning
Wencan Zhang, Mariella Dimiccoli, Brian Y. Lim
FAtt
157 · 1 · 0
30 Jan 2022

Visual Exploration of Machine Learning Model Behavior with Hierarchical Surrogate Rule Sets
IEEE Transactions on Visualization and Computer Graphics (TVCG), 2022
Jun Yuan, Brian Barr, Kyle Overton, E. Bertini
117 · 13 · 0
19 Jan 2022

More Than Words: Towards Better Quality Interpretations of Text Classifiers
Muhammad Bilal Zafar, Philipp Schmidt, Michele Donini, Cédric Archambeau, F. Biessmann, Sanjiv Ranjan Das, K. Kenthapadi
FAtt
180 · 6 · 0
23 Dec 2021

Towards a Science of Human-AI Decision Making: A Survey of Empirical Studies
Vivian Lai, Chacha Chen, Q. V. Liao, Alison Smith-Renner, Chenhao Tan
229 · 208 · 0
21 Dec 2021

LoNLI: An Extensible Framework for Testing Diverse Logical Reasoning Capabilities for NLI
Language Resources and Evaluation (LRE), 2021
Ishan Tarunesh, Somak Aditya, Monojit Choudhury
ELM, LRM
131 · 4 · 0
04 Dec 2021

Improving Users' Mental Model with Attention-directed Counterfactual Edits
Kamran Alipour, Arijit Ray, Xiaoyu Lin, Michael Cogswell, J. Schulze, Yi Yao, Giedrius Burachas
OOD
181 · 11 · 0
13 Oct 2021

Explaining Reward Functions to Humans for Better Human-Robot Collaboration
Lindsay M. Sanneman, J. Shah
121 · 5 · 0
08 Oct 2021

An Exploration And Validation of Visual Factors in Understanding Classification Rule Sets
Jun Yuan, O. Nov, E. Bertini
179 · 10 · 0
19 Sep 2021

Trusting RoBERTa over BERT: Insights from CheckListing the Natural Language Inference Task
Ishan Tarunesh, Somak Aditya, Monojit Choudhury
104 · 17 · 0
15 Jul 2021

A Review of Explainable Artificial Intelligence in Manufacturing
G. Sofianidis, Jože M. Rožanec, Dunja Mladenić, D. Kyriazis
126 · 25 · 0
05 Jul 2021

Learnt Sparsification for Interpretable Graph Neural Networks
Mandeep Rathee, Zijian Zhang, Thorben Funke, Megha Khosla, Avishek Anand
135 · 4 · 0
23 Jun 2021

Abstraction, Validation, and Generalization for Explainable Artificial Intelligence
Applied AI Letters (AA), 2021
Scott Cheng-Hsin Yang, Tomas Folke, Patrick Shafto
151 · 6 · 0
16 May 2021

Visualizing Rule Sets: Exploration and Validation of a Design Space
Jun Yuan, O. Nov, E. Bertini
169 · 1 · 0
01 Mar 2021

Unbox the Black-box for the Medical Explainable AI via Multi-modal and Multi-centre Data Fusion: A Mini-Review, Two Showcases and Beyond
Information Fusion (Inf. Fusion), 2021
Guang Yang, Qinghao Ye, Jun Xia
225 · 563 · 0
03 Feb 2021

Evaluating the Interpretability of Generative Models by Interactive Reconstruction
International Conference on Human Factors in Computing Systems (CHI), 2021
A. Ross, Nina Chen, Elisa Zhao Hang, Elena L. Glassman, Finale Doshi-Velez
269 · 52 · 0
02 Feb 2021

Explain and Predict, and then Predict Again
Web Search and Data Mining (WSDM), 2021
Zijian Zhang, Koustav Rudra, Avishek Anand
FAtt
255 · 57 · 0
11 Jan 2021

One-shot Policy Elicitation via Semantic Reward Manipulation
Aaquib Tabrez, Ryan Leonard, Bradley Hayes
133 · 2 · 0
06 Jan 2021

Challenging common interpretability assumptions in feature attribution explanations
Jonathan Dinu, Jeffrey P. Bigham, J. Z. Kolter (Unaffiliated)
216 · 14 · 0
04 Dec 2020

Evaluating and Characterizing Human Rationales
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020
Samuel Carton, Anirudh Rathore, Chenhao Tan
169 · 52 · 0
09 Oct 2020

A survey of algorithmic recourse: definitions, formulations, solutions, and prospects
Amir-Hossein Karimi, Gilles Barthe, Bernhard Schölkopf, Isabel Valera
FaML
275 · 182 · 0
08 Oct 2020

How Good is your Explanation? Algorithmic Stability Measures to Assess the Quality of Explanations for Deep Neural Networks
IEEE Workshop/Winter Conference on Applications of Computer Vision (WACV), 2020
Thomas Fel, David Vigouroux, Rémi Cadène, Thomas Serre
XAI, FAtt
309 · 33 · 0
07 Sep 2020

The role of explainability in creating trustworthy artificial intelligence for health care: a comprehensive survey of the terminology, design choices, and evaluation strategies
Journal of Biomedical Informatics (JBI), 2020
A. Markus, J. Kors, P. Rijnbeek
223 · 560 · 0
31 Jul 2020

The Impact of Explanations on AI Competency Prediction in VQA
Kamran Alipour, Arijit Ray, Xiaoyu Lin, J. Schulze, Yi Yao, Giedrius Burachas
180 · 11 · 0
02 Jul 2020

Does Explainable Artificial Intelligence Improve Human Decision-Making?
Y. Alufaisan, L. Marusich, J. Bakdash, Yan Zhou, Murat Kantarcioglu
XAI
202 · 115 · 0
19 Jun 2020

Misplaced Trust: Measuring the Interference of Machine Learning in Human Decision-Making
Harini Suresh, Natalie Lao, Ilaria Liccardi
74 · 60 · 0
22 May 2020

Explainable Deep Learning: A Field Guide for the Uninitiated
Journal of Artificial Intelligence Research (JAIR), 2020
Gabrielle Ras, Ning Xie, Marcel van Gerven, Derek Doran
AAML, XAI
346 · 414 · 0
30 Apr 2020

Sequential Interpretability: Methods, Applications, and Future Direction for Understanding Deep Learning Models in the Context of Sequential Data
B. Shickel, Parisa Rashidi
AI4TS
191 · 21 · 0
27 Apr 2020

Towards Faithfully Interpretable NLP Systems: How should we define and evaluate faithfulness?
Annual Meeting of the Association for Computational Linguistics (ACL), 2020
Alon Jacovi, Yoav Goldberg
XAI
450 · 679 · 0
07 Apr 2020