Explainable AI for Clinical Outcome Prediction: A Survey of Clinician Perceptions and Preferences

Abstract

Explainable AI (XAI) techniques are necessary to help clinicians make sense of AI predictions and integrate those predictions into their decision-making workflow. In this work, we conduct a survey study to understand clinician preferences among different XAI techniques when they are used to interpret model predictions over text-based EHR data. We implement four XAI techniques (LIME, attention-based span highlights, exemplar patient retrieval, and free-text rationales generated by LLMs) on an outcome prediction model that uses ICU admission notes to predict a patient's likelihood of experiencing in-hospital mortality. Using these XAI implementations, we design and conduct a survey study of 32 practicing clinicians, collecting their feedback and preferences on the four techniques. We synthesize our findings into a set of recommendations describing when each XAI technique may be more appropriate, their potential limitations, and directions for improvement.
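As a concrete illustration of one of the four techniques, the sketch below applies LIME to a small text classifier standing in for the mortality prediction model. The note snippets, labels, and TF-IDF + logistic regression pipeline are hypothetical stand-ins for illustration only; the paper's actual model and data are not specified here.

# Minimal sketch: LIME word-level explanations for a text-based outcome
# prediction model. All data and the classifier below are toy stand-ins.
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical ICU admission-note snippets with in-hospital mortality labels.
notes = [
    "elderly patient intubated with septic shock and rising lactate",
    "stable vitals admitted for observation after minor fall",
    "multi-organ failure on pressors poor prognosis discussed with family",
    "routine postoperative monitoring pain well controlled ambulating",
]
labels = [1, 0, 1, 0]  # 1 = in-hospital mortality, 0 = survived

# Stand-in outcome prediction model: TF-IDF features + logistic regression.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(notes, labels)

# LIME perturbs the input note, queries the model on the perturbed variants,
# and fits a local linear surrogate whose weights highlight influential words.
explainer = LimeTextExplainer(class_names=["survived", "mortality"])
explanation = explainer.explain_instance(
    notes[0],                 # the admission note to explain
    model.predict_proba,      # classifier_fn: list[str] -> (n, 2) probabilities
    num_features=5,           # number of words to include in the explanation
)
print(explanation.as_list())  # [(word, weight), ...] toward the mortality class

The per-word weights returned by as_list() correspond to the kind of token-level attributions clinicians evaluated in the survey; the other three techniques (attention highlights, exemplar retrieval, LLM rationales) would be implemented separately.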

@article{hou2025_2502.20478,
  title={Explainable AI for Clinical Outcome Prediction: A Survey of Clinician Perceptions and Preferences},
  author={Jun Hou and Lucy Lu Wang},
  journal={arXiv preprint arXiv:2502.20478},
  year={2025}
}