
Uncertainty-Aware Large Language Models for Explainable Disease Diagnosis

Abstract

Explainable disease diagnosis, which leverages patient information (e.g., signs and symptoms) and computational models to generate probable diagnoses and supporting reasoning, offers clear clinical value. However, when clinical notes contain insufficient evidence for a definitive diagnosis, such as the absence of definitive symptoms, diagnostic uncertainty usually arises, increasing the risk of misdiagnosis and adverse outcomes. Although explicitly identifying and explaining diagnostic uncertainties is essential for trustworthy diagnostic systems, it remains under-explored. To fill this gap, we introduce ConfiDx, an uncertainty-aware large language model (LLM) created by fine-tuning open-source LLMs with diagnostic criteria. We formalized the task and assembled richly annotated datasets that capture varying degrees of diagnostic ambiguity. Evaluation on real-world datasets demonstrated that ConfiDx excels at identifying diagnostic uncertainties, achieves superior diagnostic performance, and generates trustworthy explanations for diagnoses and uncertainties. To our knowledge, this is the first study to jointly address diagnostic uncertainty recognition and explanation, substantially enhancing the reliability of automatic diagnostic systems.

@article{zhou2025_2505.03467,
  title={Uncertainty-Aware Large Language Models for Explainable Disease Diagnosis},
  author={Shuang Zhou and Jiashuo Wang and Zidu Xu and Song Wang and David Brauer and Lindsay Welton and Jacob Cogan and Yuen-Hei Chung and Lei Tian and Zaifu Zhan and Yu Hou and Mingquan Lin and Genevieve B. Melton and Rui Zhang},
  journal={arXiv preprint arXiv:2505.03467},
  year={2025}
}