Aligning Large Language Models with Healthcare Stakeholders: A Pathway to Trustworthy AI Integration

The widespread adoption of large language models (LLMs) has raised awareness of the need to align model outputs with the preferences of healthcare stakeholders. This alignment is a crucial foundation for supporting healthcare workflows effectively, safely, and responsibly. Yet the behavior of LLMs may not always match healthcare stakeholders' knowledge, demands, and values. To achieve human-AI alignment, healthcare stakeholders need to play essential roles in guiding and improving LLM performance. Human professionals must participate in the entire life cycle of adopting LLMs in healthcare, including training data curation, model training, and inference. In this review, we discuss approaches, tools, and applications for aligning LLMs with healthcare stakeholders. We show that LLMs can better follow human values when healthcare knowledge integration, task understanding, and human guidance are properly strengthened. Finally, we offer an outlook on strengthening human-LLM alignment to build trustworthy real-world healthcare applications.
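
One common way to fold stakeholder guidance into the model-training stage of this life cycle is preference-based fine-tuning. The sketch below is a generic illustration, not a method from this paper: it computes a direct preference optimization (DPO) style loss over pairs of responses in which a clinician marked one response as preferred; all function names, parameters, and numbers are hypothetical.

    # Minimal sketch (illustrative only): DPO-style loss over clinician-labeled
    # preference pairs. Inputs are per-response sequence log-probabilities from
    # the policy model being fine-tuned and from a frozen reference model.
    import torch
    import torch.nn.functional as F

    def dpo_loss(policy_chosen_logps, policy_rejected_logps,
                 ref_chosen_logps, ref_rejected_logps, beta=0.1):
        """Encourage the policy to prefer the clinician-chosen response over
        the rejected one, relative to the frozen reference model."""
        policy_margin = policy_chosen_logps - policy_rejected_logps
        ref_margin = ref_chosen_logps - ref_rejected_logps
        logits = beta * (policy_margin - ref_margin)
        return -F.logsigmoid(logits).mean()

    # Toy batch of two preference pairs (made-up log-probabilities).
    policy_chosen = torch.tensor([-12.3, -8.7])
    policy_rejected = torch.tensor([-11.9, -9.5])
    ref_chosen = torch.tensor([-12.5, -9.0])
    ref_rejected = torch.tensor([-12.0, -9.4])
    print(dpo_loss(policy_chosen, policy_rejected, ref_chosen, ref_rejected))

In practice, the per-response log-probabilities would be computed on clinician-curated prompt and response pairs, so that the fine-tuned model shifts toward outputs that healthcare professionals judge to be accurate, safe, and appropriate.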
@article{ding2025_2505.02848,
  title   = {Aligning Large Language Models with Healthcare Stakeholders: A Pathway to Trustworthy AI Integration},
  author  = {Kexin Ding and Mu Zhou and Akshay Chaudhari and Shaoting Zhang and Dimitris N. Metaxas},
  journal = {arXiv preprint arXiv:2505.02848},
  year    = {2025}
}