
Visual Prompt Engineering for Vision Language Models in Radiology

Comments: 9 pages (main) + 3 pages bibliography + 4 pages appendix, 6 figures, 4 tables
Abstract

Medical image classification plays a crucial role in clinical decision-making, yet most models are constrained to a fixed set of predefined classes, limiting their adaptability to new conditions. Contrastive Language-Image Pretraining (CLIP) offers a promising solution by enabling zero-shot classification through large-scale multimodal pretraining. However, while CLIP effectively captures global image content, radiology requires a more localized focus on specific pathology regions to enhance both interpretability and diagnostic accuracy. To address this, we explore the potential of incorporating visual cues into zero-shot classification, embedding visual markers, such as arrows, bounding boxes, and circles, directly into radiological images to guide model attention. Evaluated across four public chest X-ray datasets, visual markers improve AUROC by up to 0.185, demonstrating their effectiveness in enhancing classification performance. Furthermore, attention map analysis confirms that visual cues help models focus on clinically relevant areas, leading to more interpretable predictions. To support further research, we use public datasets and provide our codebase and preprocessing pipeline under this https URL, serving as a reference point for future work on localized classification in medical imaging.
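The marker-embedding step described above can be sketched in a few lines. The function below is a minimal illustration, not the authors' exact pipeline: the function name, marker styles, and color choices are assumptions. It overlays a bounding box, circle, or arrow onto a region of interest; the marked image would then be fed to a CLIP image encoder for zero-shot classification against text prompts.

```python
# Hedged sketch of visual prompt engineering: draw a marker (box, circle,
# or arrow) around a pathology region before zero-shot CLIP classification.
# All names and styles here are illustrative assumptions.
from PIL import Image, ImageDraw


def add_visual_marker(img, bbox, marker="box", color=(255, 0, 0), width=3):
    """Overlay a marker on the region given as (x0, y0, x1, y1)."""
    img = img.convert("RGB").copy()
    draw = ImageDraw.Draw(img)
    x0, y0, x1, y1 = bbox
    if marker == "box":
        draw.rectangle(bbox, outline=color, width=width)
    elif marker == "circle":
        # ellipse inscribed in the bounding box
        draw.ellipse(bbox, outline=color, width=width)
    elif marker == "arrow":
        # simple arrow pointing at the top-left corner of the region
        draw.line([(x0 - 30, y0 - 30), (x0, y0)], fill=color, width=width)
        draw.polygon([(x0, y0), (x0 - 12, y0 - 3), (x0 - 3, y0 - 12)], fill=color)
    else:
        raise ValueError(f"unknown marker type: {marker}")
    return img


# Example: mark a region on a synthetic grayscale stand-in for an X-ray.
xray = Image.new("L", (224, 224), color=40)
marked = add_visual_marker(xray, (80, 80, 160, 160), marker="circle")
# `marked` would then be preprocessed and passed to a CLIP image encoder,
# scored against prompts such as "a chest X-ray showing pneumonia".
```

Keeping the marker in image space (rather than, say, masking or cropping) preserves global context while nudging the model's attention toward the annotated region, which is the effect the attention-map analysis in the paper examines.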
