Evaluating the Influences of Explanation Style on Human-AI Reliance

26 October 2024
Emma Casolin
Flora D. Salim
Ben Newell
arXiv:2410.20067
Abstract

Explainable AI (XAI) aims to support appropriate human-AI reliance by increasing the interpretability of complex model decisions. Despite the proliferation of proposed methods, there is mixed evidence on the effects of different styles of XAI explanation on human-AI reliance. Interpreting these conflicting findings requires an understanding of the individual and combined qualities of different explanation styles that influence appropriate and inappropriate human-AI reliance, and of the role interpretability plays in this interaction. In this study, we investigate how feature-based, example-based, and combined feature- and example-based XAI methods influence human-AI reliance through a two-part experimental study with 274 participants comparing these explanation-style conditions. Our findings suggest that feature-based and example-based explanation styles differ in ways beyond interpretability, and that these differences shape human-AI reliance patterns across variations in individual performance and task complexity. Our work highlights the importance of adapting explanations to their specific users and context over maximising broad interpretability.
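As a concrete illustration of the two explanation styles compared in the abstract, the sketch below shows one common way each might be produced for the same prediction: a feature-based explanation as per-feature contributions from a linear model, and an example-based explanation as the most similar training instance. These are not the authors' experimental stimuli; the toy data, the model, and the specific attribution and retrieval methods are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

# Toy binary-classification data standing in for the prediction task.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 4))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 2] > 0).astype(int)

model = LogisticRegression().fit(X_train, y_train)
x = rng.normal(size=(1, 4))  # instance whose prediction we want to explain

# Feature-based explanation: each feature's contribution to the decision
# score (coefficient * feature value for a linear model).
contributions = model.coef_[0] * x[0]
print("prediction:", model.predict(x)[0])
print("feature contributions:", np.round(contributions, 3))

# Example-based explanation: the most similar training instance and its label.
nn = NearestNeighbors(n_neighbors=1).fit(X_train)
_, idx = nn.kneighbors(x)
print("nearest training example:", np.round(X_train[idx[0][0]], 3),
      "label:", y_train[idx[0][0]])

# A combined feature- and example-based condition would present both
# views of the same prediction side by side.
```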
