Uncovering Cultural Representation Disparities in Vision-Language Models

Vision-Language Models (VLMs) have demonstrated impressive capabilities across a range of tasks, yet concerns remain about their potential biases. This work investigates the extent to which prominent VLMs exhibit cultural biases by evaluating their performance on a country-level, image-based country identification task. Using the geographically diverse Country211 dataset, we probe several large VLMs under various prompting strategies: open-ended questions and multiple-choice questions (MCQs), including challenging multilingual and adversarial setups. Our analysis uncovers disparities in model accuracy across countries and question formats, providing insight into how training data distribution and evaluation methodology can influence cultural bias in VLMs. The findings highlight significant variation in performance, suggesting that while VLMs possess considerable visual understanding, they inherit biases from their pre-training data and scale that limit their ability to generalize uniformly across diverse global contexts.
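The evaluation protocol described above can be sketched in a few lines. This is a minimal illustration, not the paper's actual code: the prompt wordings, function names, and country labels below are assumptions introduced for clarity, and the model call itself is left abstract.

```python
from collections import defaultdict
from typing import Dict, Sequence


def open_ended_prompt() -> str:
    # Illustrative open-ended question (wording is an assumption,
    # not the paper's exact prompt).
    return "In which country was this photo taken? Answer with the country name."


def mcq_prompt(options: Sequence[str]) -> str:
    # Illustrative multiple-choice variant: present lettered country options.
    # Multilingual or adversarial setups would vary the wording or distractors.
    letters = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    choice_lines = [f"{letters[i]}. {c}" for i, c in enumerate(options)]
    return "Which country is shown in this image?\n" + "\n".join(choice_lines)


def per_country_accuracy(preds: Sequence[str],
                         labels: Sequence[str]) -> Dict[str, float]:
    # Aggregate accuracy by ground-truth country so that
    # representation disparities across countries become visible.
    correct: Dict[str, int] = defaultdict(int)
    total: Dict[str, int] = defaultdict(int)
    for pred, label in zip(preds, labels):
        total[label] += 1
        correct[label] += int(pred == label)
    return {country: correct[country] / total[country] for country in total}
```

In this sketch, a model's predictions for each prompt format would be fed to `per_country_accuracy`, and the per-country scores compared across formats to surface the disparities the abstract reports.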
@article{kadiyala2025_2505.14729,
  title   = {Uncovering Cultural Representation Disparities in Vision-Language Models},
  author  = {Ram Mohan Rao Kadiyala and Siddhant Gupta and Jebish Purbey and Srishti Yadav and Alejandro Salamanca and Desmond Elliott},
  journal = {arXiv preprint arXiv:2505.14729},
  year    = {2025}
}