Assessing and Mitigating Medical Knowledge Drift and Conflicts in Large Language Models

Large Language Models (LLMs) hold great promise for health care, yet they struggle to keep pace with rapidly evolving medical knowledge, which can lead to outdated or contradictory treatment suggestions. This study investigated how LLMs respond to evolving clinical guidelines, focusing on concept drift and internal inconsistencies. We developed the DriftMedQA benchmark to simulate guideline evolution and used it to assess the temporal reliability of current LLMs. Our evaluation of seven state-of-the-art models across 4,290 scenarios showed that models often fail to reject outdated recommendations and frequently endorse conflicting guidance. We then explored two mitigation strategies: Retrieval-Augmented Generation (RAG) and preference fine-tuning via Direct Preference Optimization (DPO). While each method improved model performance on its own, their combination yielded the most consistent and reliable results. These findings underscore the need to improve LLM robustness to temporal shifts to ensure more dependable applications in clinical practice.
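The retrieval-augmented setup can be pictured as looking up the most recently dated guideline that matches the question and prepending it to the model's prompt, so the model answers against current rather than memorized guidance. The following is a minimal Python sketch of that idea; the guideline store, the `retrieve_latest` helper, and the recency-based ranking are illustrative assumptions, not the paper's actual retrieval pipeline.

```python
from datetime import date

# Hypothetical guideline store: (publication_date, text) pairs.
# The entries are toy data, not the paper's corpus.
GUIDELINES = [
    (date(2018, 1, 1), "First-line therapy for condition X: drug A."),
    (date(2024, 6, 1), "Drug A is no longer recommended for condition X; use drug B."),
]

def retrieve_latest(query_terms, store):
    """Return the most recently dated snippet that mentions any query term."""
    hits = [(d, t) for d, t in store
            if any(term.lower() in t.lower() for term in query_terms)]
    return max(hits, default=None)  # latest publication date wins

def build_prompt(question, store):
    """Prepend the retrieved, date-stamped guideline to the question."""
    hit = retrieve_latest(question.split(), store)
    context = f"Current guideline ({hit[0]}): {hit[1]}\n" if hit else ""
    return context + f"Question: {question}\nAnswer:"

print(build_prompt("What is first-line therapy for condition X?", GUIDELINES))
```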
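Preference fine-tuning via DPO, in turn, pairs a preferred completion (the current recommendation) with a dispreferred one (the outdated recommendation) and widens the policy's margin between them relative to a frozen reference model. Below is a minimal PyTorch sketch of the standard DPO objective (Rafailov et al., 2023); the `dpo_loss` function and the toy inputs are illustrative, not the authors' training code.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """DPO loss over a batch of preference pairs.

    Each argument is a tensor of summed token log-probabilities for
    (prompt, completion) pairs: "chosen" = current recommendation,
    "rejected" = outdated recommendation.
    """
    # Log-ratios of the policy to the frozen reference model.
    chosen_rewards = policy_chosen_logps - ref_chosen_logps
    rejected_rewards = policy_rejected_logps - ref_rejected_logps
    # Maximize the margin between current and outdated completions.
    logits = beta * (chosen_rewards - rejected_rewards)
    return -F.logsigmoid(logits).mean()

# Toy usage with random log-probabilities.
lp = torch.randn(4)
loss = dpo_loss(lp, lp - 1.0, torch.zeros(4), torch.zeros(4))
print(loss.item())
```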
@article{wu2025_2505.07968,
  title={Assessing and Mitigating Medical Knowledge Drift and Conflicts in Large Language Models},
  author={Weiyi Wu and Xinwen Xu and Chongyang Gao and Xingjian Diao and Siting Li and Lucas A. Salas and Jiang Gui},
  journal={arXiv preprint arXiv:2505.07968},
  year={2025}
}