MedM-VL: What Makes a Good Medical LVLM?

Medical image analysis is essential in modern healthcare. Advances in deep learning have shifted research toward complex multimodal medical tasks, including report generation and visual question answering, where traditional task-specific models often fall short. Large vision-language models (LVLMs) offer a promising solution to these tasks. In this study, we build on the popular LLaVA framework to systematically explore model architectures and training strategies for both 2D and 3D medical LVLMs, and we present extensive empirical findings and practical guidance. To support reproducibility and future research, we release a modular codebase, MedM-VL, and two pre-trained models: MedM-VL-2D for 2D medical image analysis and MedM-VL-CT-Chest for 3D CT-based applications. The code and models are available at: this https URL
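For context on the LLaVA-style design the abstract refers to, the sketch below illustrates the core idea: a vision encoder produces patch embeddings, a lightweight projector maps them into the LLM's embedding space, and the LLM consumes the projected visual tokens alongside text. All class names and dimensions are illustrative assumptions for exposition, not the actual MedM-VL API.

```python
# Illustrative sketch of a LLaVA-style connector (assumed design, not the MedM-VL codebase).
import torch
import torch.nn as nn

class MLPProjector(nn.Module):
    """Maps vision-encoder patch embeddings into the LLM embedding space."""
    def __init__(self, vision_dim: int, llm_dim: int):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(vision_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, patch_embeds: torch.Tensor) -> torch.Tensor:
        # patch_embeds: (batch, num_patches, vision_dim) -> (batch, num_patches, llm_dim)
        return self.proj(patch_embeds)

# Toy dimensions for demonstration only.
vision_dim, llm_dim = 1024, 4096
projector = MLPProjector(vision_dim, llm_dim)

# Fake vision-encoder output: 2 images, 256 patches each.
patch_embeds = torch.randn(2, 256, vision_dim)
visual_tokens = projector(patch_embeds)  # ready to be prepended to the text embeddings
print(visual_tokens.shape)  # torch.Size([2, 256, 4096])
```

The same connector pattern extends to 3D inputs (e.g., CT volumes) by swapping the 2D vision encoder for a volumetric one while keeping the projector-plus-LLM structure unchanged.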