Vision-Language Models (VLMs) for medical image analysis can process multimodal inputs and improve performance over traditional inference methods. However, given the clinical settings in which these models are deployed, fairness and robustness are essential so that the model performs reliably for every patient. In this paper, we introduce a framework for improving the robustness and fairness of VLMs. The framework modifies the training loss in two ways: a Dynamic Bad Pair Mining algorithm identifies and adjusts faulty image-text pairs, and a Sinkhorn-distance penalty keeps the loss distribution of each protected group from deviating from the overall loss distribution. Experiments with our framework show up to an 8.6% improvement in equity-scaled AUC.
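The abstract describes the two loss modifications only at a high level. The sketch below is a minimal, hypothetical illustration of how such a training objective could be assembled, not the authors' implementation: per-pair losses with anomalously high values are softly down-weighted as candidate "bad pairs", and an entropy-regularized optimal-transport (Sinkhorn) distance penalizes divergence between each protected group's loss distribution and the batch-wide loss distribution. All function names, thresholds, and weightings are assumptions.

```python
# Hypothetical sketch of a robustness- and fairness-aware training loss.
# Not the authors' released code; mechanisms and hyperparameters are illustrative.
import torch


def sinkhorn_distance_1d(a: torch.Tensor, b: torch.Tensor,
                         eps: float = 0.05, n_iters: int = 50) -> torch.Tensor:
    """Entropy-regularized OT distance between two 1-D samples with uniform weights."""
    cost = torch.abs(a[:, None] - b[None, :])           # |loss_i - loss_j| cost matrix
    K = torch.exp(-cost / eps)                           # Gibbs kernel
    mu = torch.full((a.numel(),), 1.0 / a.numel(), device=a.device)
    nu = torch.full((b.numel(),), 1.0 / b.numel(), device=b.device)
    u, v = torch.ones_like(mu), torch.ones_like(nu)
    for _ in range(n_iters):                             # Sinkhorn fixed-point updates
        u = mu / (K @ v + 1e-9)
        v = nu / (K.T @ u + 1e-9)
    transport = u[:, None] * K * v[None, :]
    return (transport * cost).sum()


def robust_fair_loss(pair_losses: torch.Tensor,
                     group_ids: torch.Tensor,
                     bad_pair_z: float = 2.0,
                     fair_weight: float = 0.1) -> torch.Tensor:
    """pair_losses: per image-text pair loss (e.g. CLIP-style); group_ids: protected attribute."""
    # (1) Bad-pair handling (assumed mechanism): pairs whose loss is far above the
    #     batch statistics are treated as potentially faulty and softly down-weighted.
    mean, std = pair_losses.mean(), pair_losses.std() + 1e-9
    z = (pair_losses - mean) / std
    weights = torch.where(z > bad_pair_z,
                          torch.exp(-(z - bad_pair_z)),  # smooth down-weighting
                          torch.ones_like(z))
    base = (weights.detach() * pair_losses).mean()

    # (2) Fairness term: keep each protected group's loss distribution close to the
    #     overall batch loss distribution via the Sinkhorn distance.
    fair = pair_losses.new_zeros(())
    for g in group_ids.unique():
        group_losses = pair_losses[group_ids == g]
        if group_losses.numel() > 1:
            fair = fair + sinkhorn_distance_1d(group_losses, pair_losses)
    return base + fair_weight * fair


# Toy usage with random per-pair losses and two protected groups.
losses = torch.rand(32, requires_grad=True) * 2.0
groups = torch.randint(0, 2, (32,))
total = robust_fair_loss(losses, groups)
total.backward()
```

The two terms act independently: the weighting only changes how much each pair contributes to the task loss, while the Sinkhorn penalty only constrains how group-level loss distributions compare to the whole batch.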
@article{bansal2025_2505.03153,
  title   = {Robust Fairness Vision-Language Learning for Medical Image Analysis},
  author  = {Sparsh Bansal and Mingyang Wu and Xin Wang and Shu Hu},
  journal = {arXiv preprint arXiv:2505.03153},
  year    = {2025}
}