Uncovering Bias in Large Vision-Language Models at Scale with Counterfactuals

With the advent of Large Language Models (LLMs) possessing increasingly impressive capabilities, a number of Large Vision-Language Models (LVLMs) have been proposed to augment LLMs with visual inputs. Such models condition generated text on both an input image and a text prompt, enabling a variety of use cases such as visual question answering and multimodal chat. While prior studies have examined the social biases contained in text generated by LLMs, this topic remains relatively unexplored for LVLMs. Examining social biases in LVLMs is particularly challenging because bias induced by information in the text and visual modalities is confounded. To address this challenge, we conduct a large-scale study of text generated by different LVLMs under counterfactual changes to input images, producing over 57 million responses from popular models. Our multi-dimensional bias evaluation framework reveals that social attributes such as perceived race, gender, and physical characteristics depicted in images can significantly influence the generation of toxic content, competency-associated words, harmful stereotypes, and numerical ratings of individuals.
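To make the counterfactual setup concrete, the sketch below illustrates one way such an evaluation loop could be structured: an LVLM is queried with matched image pairs that differ only in a perceived social attribute, and the generated text is scored for toxicity across groups. This is a minimal illustration, not the authors' released pipeline; `query_lvlm`, the image filenames, and the prompt are hypothetical placeholders, and Detoxify is used here only as one possible off-the-shelf toxicity scorer.

```python
# Minimal sketch of a counterfactual bias probe for an LVLM (not the paper's actual code).
# Matched image pairs differ only in a perceived social attribute; a gap in mean toxicity
# between the two groups of generations is one signal of social bias.

from statistics import mean
from detoxify import Detoxify  # one possible off-the-shelf toxicity classifier

def query_lvlm(image_path: str, prompt: str) -> str:
    """Hypothetical placeholder: in a real study this would call an LVLM on the image."""
    return f"A generated description conditioned on {image_path}."

# Hypothetical counterfactual pairs: same scene/person, attribute of interest varied.
counterfactual_pairs = [
    ("person_001_groupA.png", "person_001_groupB.png"),
    ("person_002_groupA.png", "person_002_groupB.png"),
]
prompt = "Describe this person in a few sentences."

scorer = Detoxify("original")
scores = {"A": [], "B": []}
for img_a, img_b in counterfactual_pairs:
    scores["A"].append(scorer.predict(query_lvlm(img_a, prompt))["toxicity"])
    scores["B"].append(scorer.predict(query_lvlm(img_b, prompt))["toxicity"])

print("mean toxicity, group A:", mean(scores["A"]))
print("mean toxicity, group B:", mean(scores["B"]))
```

In practice, the same loop can be repeated with other scorers (e.g., lexicons of competency-associated words or stereotype classifiers) and with numerical-rating prompts, which is the kind of multi-dimensional evaluation the abstract describes.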
@article{howard2025_2405.20152,
  title={Uncovering Bias in Large Vision-Language Models at Scale with Counterfactuals},
  author={Phillip Howard and Kathleen C. Fraser and Anahita Bhiwandiwalla and Svetlana Kiritchenko},
  journal={arXiv preprint arXiv:2405.20152},
  year={2025}
}