Evaluating an evidence-guided reinforcement learning framework in aligning light-parameter large language models with decision-making cognition in psychiatric clinical reasoning

Xinxin Lin
Guangxin Dai
Yi Zhong
Xiang Li
Xue Xiao
Yixin Zhang
Zhengdong Wu
Yongbo Zheng
Runchuan Zhu
Ming Zhao
Huizi Yu
Shuo Wu
Jun Zhao
Lingming Hu
Yumei Wang
Ping Yin
Joey W.Y. Chan
Ngan Yin Chan
Sijing Chen
Yun Kwok Wing
Lin Lu
Xin Ma
Lizhou Fan
Main: 13 pages · 8 figures · Bibliography: 2 pages · Appendix: 6 pages
Abstract

Large language models (LLMs) hold transformative potential for medical decision support, yet their application in psychiatry remains constrained by hallucinations and superficial reasoning. This limitation is particularly acute in light-parameter LLMs, which are essential for privacy-preserving and efficient clinical deployment. Existing training paradigms prioritize linguistic fluency over structured clinical logic, resulting in a fundamental misalignment with professional diagnostic cognition. Here we introduce ClinMPO, a reinforcement learning framework designed to align the internal reasoning of LLMs with professional psychiatric practice. The framework employs a specialized reward model trained independently on a dataset derived from 4,474 psychiatry journal articles and structured according to evidence-based medicine principles. We evaluated ClinMPO on an unseen benchmark subset designed to isolate reasoning capability from rote memorization. This test set comprises items on which leading large-parameter LLMs consistently fail. We compared the performance of the ClinMPO-aligned light LLM against a cohort of 300 medical students. The ClinMPO-tuned Qwen3-8B model achieved a diagnostic accuracy of 31.4%, surpassing the human benchmark of 30.8% on these complex cases. These results demonstrate that medical evidence-guided optimization enables light-parameter LLMs to master complex reasoning tasks. Our findings suggest that explicit cognitive alignment offers a scalable pathway to reliable and safe psychiatric decision support.
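
The abstract does not specify ClinMPO's optimization algorithm, but the pipeline it describes (a frozen, independently trained reward model guiding policy optimization of a light LLM) follows the general reward-guided RL fine-tuning recipe. The sketch below illustrates that recipe with a REINFORCE-style policy-gradient update in PyTorch. Everything here is an illustrative stand-in: TinyPolicy substitutes for a light-parameter LLM such as Qwen3-8B, and reward_fn substitutes for the learned evidence-based reward model; neither reflects the paper's actual components.

```python
# Minimal, self-contained sketch of reward-guided policy-gradient fine-tuning.
# Assumption: TinyPolicy, reward_fn, and all sizes are toy stand-ins, not the
# paper's ClinMPO objective or its reward model trained on journal articles.
import torch
import torch.nn as nn

VOCAB, SEQ_LEN = 32, 8

class TinyPolicy(nn.Module):
    """Toy autoregressive policy standing in for a light-parameter LLM."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, 64)
        self.rnn = nn.GRU(64, 64, batch_first=True)
        self.head = nn.Linear(64, VOCAB)

    def forward(self, tokens):
        h, _ = self.rnn(self.embed(tokens))
        return self.head(h)  # next-token logits at every position

def reward_fn(seq):
    # Hypothetical proxy for the learned evidence-based reward model:
    # reward sequences whose tokens are sorted, as a stand-in for
    # "well-structured" reasoning. The real reward model is a trained
    # network scoring clinical reasoning against evidence-based criteria.
    return float(all(a <= b for a, b in zip(seq, seq[1:])))

policy = TinyPolicy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

for step in range(200):
    tokens = torch.zeros(1, 1, dtype=torch.long)  # token id 0 acts as BOS
    log_probs = []
    for _ in range(SEQ_LEN):
        logits = policy(tokens)[:, -1, :]
        dist = torch.distributions.Categorical(logits=logits)
        tok = dist.sample()
        log_probs.append(dist.log_prob(tok))
        tokens = torch.cat([tokens, tok.unsqueeze(1)], dim=1)
    r = reward_fn(tokens[0, 1:].tolist())
    # REINFORCE: raise the log-likelihood of rewarded trajectories.
    loss = -r * torch.stack(log_probs).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Production RLHF-style pipelines typically add a learned baseline or advantage estimate and a KL penalty against the reference model to stabilize training; those refinements are omitted here to keep the recipe visible.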
