Adversarial Robustness Analysis of Vision-Language Models in Medical Image Segmentation

Adversarial attacks have been extensively explored for computer vision and vision-language models. However, adversarial attacks on vision-language segmentation models (VLSMs) remain under-explored, especially for medical image analysis. We therefore investigate the robustness of VLSMs against adversarial attacks on 2D medical images spanning several modalities, including radiology, photography, and endoscopy. The main goal of this work is to assess the robustness of fine-tuned VLSMs in the medical domain, where the cost of failure is high. First, we fine-tune pre-trained VLSMs for medical image segmentation using adapters. Then, we apply adversarial attacks -- projected gradient descent (PGD) and the fast gradient sign method (FGSM) -- to the fine-tuned models to assess their robustness against adversaries. We report the decline in model performance to analyze the impact of the adversaries. The results show significant drops in DSC and IoU scores after the attacks are introduced. Furthermore, we also explored universal adversarial perturbations but were unable to find an effective one for the medical images.\footnote{this https URL}
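The attacks named above, FGSM and PGD, follow their standard formulations. Below is a minimal PyTorch-style sketch of how they might be applied to a segmentation model, assuming a hypothetical model(image, text_emb) forward pass that returns per-pixel logits and binary ground-truth masks; the actual model interface, loss, and attack hyperparameters (epsilon, alpha, steps) used in the paper may differ.

    import torch
    import torch.nn.functional as F

    def dice_score(pred_logits, target, eps=1e-6):
        # Dice similarity coefficient (DSC) for binary masks.
        pred = (torch.sigmoid(pred_logits) > 0.5).float()
        inter = (pred * target).sum()
        return (2 * inter + eps) / (pred.sum() + target.sum() + eps)

    def fgsm_attack(model, image, text_emb, mask, epsilon):
        # Single-step FGSM: perturb the image along the sign of the loss gradient.
        image = image.clone().detach().requires_grad_(True)
        logits = model(image, text_emb)  # hypothetical VLSM forward pass
        loss = F.binary_cross_entropy_with_logits(logits, mask)
        loss.backward()
        adv = image + epsilon * image.grad.sign()
        return adv.clamp(0, 1).detach()

    def pgd_attack(model, image, text_emb, mask, epsilon, alpha, steps):
        # Iterative PGD: repeated gradient-sign steps projected back into the L-inf ball.
        orig = image.clone().detach()
        adv = orig.clone()
        for _ in range(steps):
            adv.requires_grad_(True)
            logits = model(adv, text_emb)
            loss = F.binary_cross_entropy_with_logits(logits, mask)
            loss.backward()
            with torch.no_grad():
                adv = adv + alpha * adv.grad.sign()
                adv = orig + (adv - orig).clamp(-epsilon, epsilon)  # project onto L-inf ball
                adv = adv.clamp(0, 1)
            adv = adv.detach()
        return adv

In this setup, robustness would be quantified by computing DSC (and, analogously, IoU) on clean images and on their adversarial counterparts, then reporting the performance drop.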
@article{budathoki2025_2505.02971,
  title   = {Adversarial Robustness Analysis of Vision-Language Models in Medical Image Segmentation},
  author  = {Anjila Budathoki and Manish Dhakal},
  journal = {arXiv preprint arXiv:2505.02971},
  year    = {2025}
}