Medical Large Multi-modal Models (LMMs) have demonstrated remarkable capabilities in medical data interpretation. However, these models frequently generate hallucinations that contradict the source evidence, particularly due to inadequate localization reasoning. This work reveals a critical limitation of current medical LMMs: instead of analyzing the relevant pathological regions, they often rely on linguistic patterns or attend to irrelevant image areas when responding to disease-related queries. To address this, we introduce HEAL-MedVQA (Hallucination Evaluation via Localization MedVQA), a comprehensive benchmark designed to evaluate LMMs' localization abilities and hallucination robustness. HEAL-MedVQA features (i) two innovative evaluation protocols that assess visual and textual shortcut learning, and (ii) a dataset of 67K VQA pairs with doctor-annotated anatomical segmentation masks for pathological regions. To improve visual reasoning, we propose the Localize-before-Answer (LobA) framework, which trains LMMs to localize target regions of interest and self-prompt to emphasize the segmented pathological areas, producing grounded and reliable answers. Experimental results demonstrate that our approach significantly outperforms state-of-the-art biomedical LMMs on the challenging HEAL-MedVQA benchmark, advancing robustness in medical VQA.
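To make the localize-before-answer idea concrete, the following is a minimal sketch of a two-stage inference loop in the spirit described above: first predict where the queried pathology lies, then fold that localization back into the prompt before answering. All names here (segment_pathology, build_region_prompt, answer_with_lmm, RegionMask) are hypothetical placeholders for illustration, not the authors' actual LobA implementation.

```python
# Sketch of a localize-before-answer inference loop (illustrative only).
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class RegionMask:
    """Coarse stand-in for a predicted pathological-region mask."""
    label: str
    bbox: Tuple[int, int, int, int]  # (x_min, y_min, x_max, y_max) in pixels


def segment_pathology(image, question: str) -> List[RegionMask]:
    # Stage 1 (assumed): predict where the queried pathology would appear
    # before generating any answer. A fixed dummy region is returned here
    # purely for illustration.
    return [RegionMask(label="opacity", bbox=(120, 80, 220, 180))]


def build_region_prompt(question: str, regions: List[RegionMask]) -> str:
    # Stage 2 (assumed): fold the predicted regions back into the prompt
    # ("self-prompting"), so the answer is conditioned on them.
    region_desc = "; ".join(f"{r.label} at {r.bbox}" for r in regions)
    return (
        f"Question: {question}\n"
        f"Focus on the following localized regions: {region_desc}\n"
        f"Answer using only evidence visible in those regions."
    )


def answer_with_lmm(image, prompt: str) -> str:
    # Placeholder for a call to a medical LMM; a real system would pass
    # both the image (or masked crops) and the prompt to the model.
    return "Yes, a focal opacity is present in the highlighted region."


def localize_before_answer(image, question: str) -> str:
    regions = segment_pathology(image, question)
    prompt = build_region_prompt(question, regions)
    return answer_with_lmm(image, prompt)


if __name__ == "__main__":
    print(localize_before_answer(image=None, question="Is there a lung opacity?"))
```

The point of the sketch is only the ordering: the answer-generation call never sees the question alone, but always a prompt augmented with the localized regions, which is what grounds the response in the relevant image evidence.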
@article{nguyen2025_2505.00744,
  title={Localizing Before Answering: A Hallucination Evaluation Benchmark for Grounded Medical Multimodal LLMs},
  author={Dung Nguyen and Minh Khoi Ho and Huy Ta and Thanh Tam Nguyen and Qi Chen and Kumar Rav and Quy Duong Dang and Satwik Ramchandre and Son Lam Phung and Zhibin Liao and Minh-Son To and Johan Verjans and Phi Le Nguyen and Vu Minh Hieu Phan},
  journal={arXiv preprint arXiv:2505.00744},
  year={2025}
}