Benchmarking Large Language Models on Answering and Explaining Challenging Medical Questions

LLMs have demonstrated impressive performance in answering medical questions, such as achieving passing scores on medical licensing examinations. However, medical board exams and general clinical questions do not capture the complexity of realistic clinical cases. Moreover, the lack of reference explanations means we cannot easily evaluate the reasoning behind model decisions, a crucial component of supporting doctors in making complex medical decisions. To address these challenges, we construct two new datasets: JAMA Clinical Challenge and Medbullets.\footnote{Datasets and code are available at \url{this https URL}.} JAMA Clinical Challenge consists of questions based on challenging clinical cases, while Medbullets comprises simulated clinical questions. Both datasets are structured as multiple-choice question-answering tasks accompanied by expert-written explanations. We evaluate seven LLMs on the two datasets using various prompts. Experiments demonstrate that our datasets are harder than previous benchmarks. In-depth automatic and human evaluations of model-generated explanations provide insights into the promise and deficiencies of LLMs for explainable medical QA.
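To make the task setup concrete, the sketch below illustrates one plausible way to represent a multiple-choice clinical question with an expert-written explanation and to score a model's answer accuracy under a simple zero-shot prompt. This is a minimal illustration, not the authors' released code: the field names, the prompt wording, and the `ask_model` callable are assumptions introduced here for clarity.

```python
# Minimal sketch (assumed data layout, not the paper's actual schema) of a
# multiple-choice clinical QA example and a simple accuracy evaluation loop.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class ClinicalMCQ:
    case: str                 # patient case description / question stem
    options: Dict[str, str]   # e.g. {"A": "...", "B": "...", ...}
    answer: str               # gold option letter
    explanation: str          # expert-written rationale


def build_prompt(q: ClinicalMCQ) -> str:
    """Format a question as a zero-shot prompt asking for an answer and explanation."""
    opts = "\n".join(f"{k}. {v}" for k, v in sorted(q.options.items()))
    return (
        f"{q.case}\n\n{opts}\n\n"
        "Choose the best option and explain your reasoning. "
        "Begin your reply with the option letter."
    )


def accuracy(questions: List[ClinicalMCQ], ask_model: Callable[[str], str]) -> float:
    """Score a model by comparing the first letter of its reply to the gold answer."""
    correct = 0
    for q in questions:
        reply = ask_model(build_prompt(q)).strip()
        predicted = reply[:1].upper() if reply else ""
        correct += int(predicted == q.answer)
    return correct / len(questions) if questions else 0.0
```

In use, `ask_model` would wrap whichever LLM is being benchmarked (e.g., an API call that returns the model's text reply), and the model-generated explanations returned alongside the letter could then be compared against the expert-written ones in the automatic and human evaluations described above.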
@article{chen2025_2402.18060,
  title={Benchmarking Large Language Models on Answering and Explaining Challenging Medical Questions},
  author={Hanjie Chen and Zhouxiang Fang and Yash Singla and Mark Dredze},
  journal={arXiv preprint arXiv:2402.18060},
  year={2025}
}