Exploring the Effect of Explanation Content and Format on User Comprehension and Trust in Healthcare

Abstract

AI-driven tools for healthcare are widely acknowledged as potentially beneficial to health practitioners and patients, e.g., the QCancer regression tool for cancer risk prediction. However, for these tools to be trusted, they need to be supplemented with explanations. We examine how the content and format of explanations affect user comprehension and trust when explaining QCancer's predictions. Regarding content, we deploy SHAP and Occlusion-1. Regarding format, we present SHAP explanations conventionally as charts (SC), and Occlusion-1 explanations both as charts (OC) and as text (OT), a format to which their simpler nature lends itself. We conduct experiments with two sets of stakeholders: the general public (representing patients) and medical students (representing healthcare practitioners). Our experiments showed higher subjective comprehension and trust for Occlusion-1 over SHAP explanations based on content. However, when controlling for format, only OT outperformed SC, suggesting this trend is driven by preferences for text. Other findings corroborated that explanation format, rather than content, is often the critical factor.
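The Occlusion-1 method compared above attributes a prediction to each input feature by removing (occluding) one feature at a time and measuring the resulting change in the model's output. The following is a minimal sketch of that idea; the weighted-sum `predict_risk` model and its feature names are hypothetical stand-ins, not the actual QCancer model.

```python
# Minimal sketch of Occlusion-1 feature attribution: occlude one feature
# at a time and record the change in the predicted risk score.
# The linear "risk model" below is a hypothetical stand-in for QCancer.

def predict_risk(features):
    # Hypothetical risk model: weighted sum of feature values.
    weights = {"age": 0.02, "smoker": 0.30, "family_history": 0.15}
    return sum(weights[name] * value for name, value in features.items())

def occlusion_1(features):
    """Attribution of each feature = baseline prediction minus the
    prediction with that single feature occluded (set to 0 here)."""
    baseline = predict_risk(features)
    attributions = {}
    for name in features:
        occluded = dict(features)
        occluded[name] = 0  # occlude this feature only
        attributions[name] = baseline - predict_risk(occluded)
    return attributions

patient = {"age": 60, "smoker": 1, "family_history": 1}
attributions = occlusion_1(patient)
```

Because each attribution is a single "prediction drops by X when this feature is removed" statement, such explanations translate naturally into the textual (OT) format the paper studies, as well as into bar charts (OC).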

@article{rago2025_2408.17401,
  title={Exploring the Effect of Explanation Content and Format on User Comprehension and Trust in Healthcare},
  author={Antonio Rago and Bence Palfi and Purin Sukpanichnant and Hannibal Nabli and Kavyesh Vivek and Olga Kostopoulou and James Kinross and Francesca Toni},
  journal={arXiv preprint arXiv:2408.17401},
  year={2025}
}