Features of Explainability: How users understand counterfactual and causal explanations for categorical and continuous features in XAI
21 April 2022
Greta Warren, Mark T. Keane, R. Byrne
CML
arXiv:2204.10152

Papers citing "Features of Explainability: How users understand counterfactual and causal explanations for categorical and continuous features in XAI"

13 / 13 papers shown
Towards Unifying Evaluation of Counterfactual Explanations: Leveraging Large Language Models for Human-Centric Assessments
M. Domnich, Julius Valja, Rasmus Moorits Veski, Giacomo Magnifico, Kadi Tulver, Eduard Barbu, Raul Vicente
LRM, ELM
28 Oct 2024
Cultural Bias in Explainable AI Research: A Systematic Analysis
Uwe Peters, Mary Carman
28 Feb 2024
The Utility of "Even if..." Semifactual Explanation to Optimise Positive
  Outcomes
The Utility of "Even if..." Semifactual Explanation to Optimise Positive Outcomes
Eoin M. Kenny
Weipeng Huang
29 Oct 2023
T-COL: Generating Counterfactual Explanations for General User Preferences on Variable Machine Learning Systems
Yiming Li, Daling Wang, Wenfang Wu, Shi Feng, Yifei Zhang
CML
28 Sep 2023
For Better or Worse: The Impact of Counterfactual Explanations' Directionality on User Behavior in xAI
Ulrike Kuhl, André Artelt, Barbara Hammer
13 Jun 2023
Explaining Groups of Instances Counterfactually for XAI: A Use Case, Algorithm and User Study for Group-Counterfactuals
Greta Warren, Mark T. Keane, Christophe Guéret, Eoin Delaney
16 Mar 2023
Explaining Classifications to Non Experts: An XAI User Study of Post Hoc Explanations for a Classifier When People Lack Expertise
Courtney Ford, Mark T. Keane
19 Dec 2022
Counterfactual Explanations for Misclassified Images: How Human and Machine Explanations Differ
Eoin Delaney, A. Pakrashi, Derek Greene, Mark T. Keane
16 Dec 2022
Can counterfactual explanations of AI systems' predictions skew lay users' causal intuitions about the world? If so, can we correct for that?
Marko Tešić, U. Hahn
CML
12 May 2022
Let's Go to the Alien Zoo: Introducing an Experimental Framework to Study Usability of Counterfactual Explanations for Machine Learning
Ulrike Kuhl, André Artelt, Barbara Hammer
06 May 2022
Situated Conditional Reasoning
Giovanni Casini, T. Meyer, I. Varzinczak
03 Sep 2021
Counterfactual Explanations and Algorithmic Recourses for Machine Learning: A Review
Sahil Verma, Varich Boonsanong, Minh Hoang, Keegan E. Hines, John P. Dickerson, Chirag Shah
CML
20 Oct 2020
Towards A Rigorous Science of Interpretable Machine Learning
Finale Doshi-Velez, Been Kim
XAI, FaML
28 Feb 2017