
Investigating Disability Representations in Text-to-Image Models

Yang Tian
Yu Fan
Liudmila Zavolokina
Sarah Ebling
Main: 15 pages · Bibliography: 5 pages · Appendix: 1 page · 10 figures · 9 tables
Abstract

Text-to-image generative models have made remarkable progress in producing high-quality visual content from textual descriptions, yet concerns remain about how they represent social groups. While attributes such as gender and race have received increasing attention, representations of disability remain underexplored. This study investigates how people with disabilities are portrayed in AI-generated images by analyzing outputs from Stable Diffusion XL and DALL-E 3 using a structured prompt design. We compare image similarities between outputs for generic disability prompts and outputs for prompts referring to specific disability categories. Moreover, we evaluate how mitigation strategies influence disability portrayals, focusing on affective framing assessed through sentiment polarity analysis that combines automatic and human evaluation. Our findings reveal persistent representational imbalances and highlight the need for continuous evaluation and refinement of generative models to foster more diverse and inclusive portrayals of disability.
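The abstract does not specify an implementation for the similarity comparison. A minimal sketch of one plausible setup is shown below, assuming CLIP image embeddings and cosine similarity; the model checkpoint, the `embed` helper, and the file names are illustrative assumptions, not the paper's published method.

```python
# Sketch: compare embeddings of images generated from a generic disability
# prompt against images generated from a category-specific prompt.
# CLIP is an assumed embedding model; the paper may use another measure.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed(paths):
    """Return L2-normalized CLIP image embeddings for a list of image paths."""
    images = [Image.open(p).convert("RGB") for p in paths]
    inputs = processor(images=images, return_tensors="pt")
    with torch.no_grad():
        feats = model.get_image_features(**inputs)
    return feats / feats.norm(dim=-1, keepdim=True)

# Hypothetical file lists: outputs for a generic prompt such as
# "a person with a disability" vs. a specific category such as
# "a person who is blind".
generic = embed(["generic_0.png", "generic_1.png"])
specific = embed(["blind_0.png", "blind_1.png"])

# Pairwise cosine similarity between the two sets (values in [-1, 1]);
# a high mean suggests the generic prompt collapses onto one category.
similarity = generic @ specific.T
print(f"mean cross-set similarity: {similarity.mean().item():.3f}")
```

Similarly, the automatic half of the sentiment polarity analysis could be approximated by scoring textual descriptions of the generated images with an off-the-shelf classifier. The captions and the default pipeline model below are assumptions for illustration only.

```python
# Sketch: sentiment polarity over hypothetical captions of generated images.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")  # default English polarity model
captions = [
    "a smiling man in a wheelchair at a park",
    "a sad woman sitting alone in a dark room",
]
for caption, result in zip(captions, sentiment(captions)):
    print(f"{result['label']:>8}  {result['score']:.2f}  {caption}")
```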
