
A Lightweight Large Vision-language Model for Multimodal Medical Images

Abstract

Medical Visual Question Answering (VQA) enhances clinical decision-making by enabling systems to interpret medical images and answer clinical queries. However, developing efficient, high-performance VQA models is challenging due to the complexity of medical imagery and the diversity of imaging modalities. In this paper, we introduce a lightweight, multimodal VQA model that integrates BiomedCLIP for image feature extraction and LLaMA-3 for text processing. Designed for medical VQA tasks, our model achieves state-of-the-art performance on the OmniMedVQA dataset. With approximately 8 billion parameters, it requires only two 40 GB NVIDIA A100 GPUs, demonstrating superior efficiency over larger models. Our results show 73.4% accuracy on open-ended questions, surpassing existing models and validating its potential for real-world medical applications. Key contributions include a specialized multimodal VQA model, a resource-efficient architecture, and strong performance in answering open-ended clinical questions.
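
The abstract describes coupling a biomedical image encoder (BiomedCLIP) with a large language model (LLaMA-3) for VQA. Below is a minimal, illustrative PyTorch sketch of the common projection-based fusion pattern such systems use: pooled image features are projected into the language model's embedding space and prepended to the question tokens. It is not the authors' implementation; the module names, dimensions, stand-in encoder/decoder blocks, and the fusion strategy are all assumptions, since the abstract does not specify them.

```python
# Minimal sketch (not the authors' code): projection-based fusion of a
# frozen image encoder (standing in for BiomedCLIP) with a causal LM
# (standing in for LLaMA-3). All names, dimensions, and the fusion
# strategy are illustrative assumptions.
import torch
import torch.nn as nn


class MedVQAFusion(nn.Module):
    def __init__(self, image_dim: int = 512, llm_dim: int = 1024, vocab: int = 32000):
        super().__init__()
        # Projects BiomedCLIP-style pooled image features into the LM embedding space.
        self.projector = nn.Linear(image_dim, llm_dim)
        # Stand-ins for the pretrained components (replace with BiomedCLIP / LLaMA-3).
        self.token_embed = nn.Embedding(vocab, llm_dim)
        self.backbone = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=llm_dim, nhead=8, batch_first=True),
            num_layers=2,
        )
        self.lm_head = nn.Linear(llm_dim, vocab)

    def forward(self, image_feats: torch.Tensor, question_ids: torch.Tensor) -> torch.Tensor:
        # image_feats: (batch, image_dim) pooled visual features.
        # question_ids: (batch, seq_len) tokenized clinical question.
        visual_tokens = self.projector(image_feats).unsqueeze(1)   # (B, 1, llm_dim)
        text_tokens = self.token_embed(question_ids)               # (B, T, llm_dim)
        fused = torch.cat([visual_tokens, text_tokens], dim=1)     # prepend image token
        hidden = self.backbone(fused)
        return self.lm_head(hidden)                                # per-position vocab logits


if __name__ == "__main__":
    model = MedVQAFusion()
    img = torch.randn(2, 512)                    # dummy image features
    q = torch.randint(0, 32000, (2, 16))         # dummy question tokens
    print(model(img, q).shape)                   # torch.Size([2, 17, 32000])
```

In practice the projector is often the only component trained from scratch, with the image encoder kept frozen and the language model either frozen or lightly fine-tuned; whether the authors follow that recipe is not stated in the abstract.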

@article{alsinglawi2025_2504.05575,
  title={A Lightweight Large Vision-language Model for Multimodal Medical Images},
  author={Belal Alsinglawi and Chris McCarthy and Sara Webb and Christopher Fluke and Navid Toosy Saidy},
  journal={arXiv preprint arXiv:2504.05575},
  year={2025}
}