
debiaSAE: Benchmarking and Mitigating Vision-Language Model Bias

Abstract

As Vision-Language Models (VLMs) gain widespread use, their fairness remains under-explored. In this paper, we analyze demographic biases across five models and six datasets. We find that portrait datasets like UTKFace and CelebA are the best tools for bias detection, revealing gaps in both performance and fairness for LLaVA and CLIP models. Scene-based datasets like PATA and VLStereoSet fail as bias benchmarks because their text prompts allow the model to guess the answer without seeing the image. For pronoun-based datasets like VisoGender, the signals are mixed, as only some subsets of the data yield useful insights. To address these problems, we introduce a more rigorous evaluation dataset and a debiasing method based on Sparse Autoencoders (SAEs). We find that our dataset elicits more meaningful errors than the previous datasets, and that our debiasing method improves fairness, gaining 5 to 15 points over the baseline. This study exposes the shortcomings of current benchmarks for measuring demographic bias in Vision-Language Models and introduces both a more effective dataset for measuring bias and a novel, interpretable debiasing method based on Sparse Autoencoders.
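The abstract does not spell out how the SAE intervention operates. As a rough illustration only, the sketch below shows one common pattern for SAE-based debiasing: train a sparse autoencoder on a model's internal activations, identify latent units that correlate with a demographic attribute, and zero them out before reconstructing the activation. All names here (SparseAutoencoder, sae_loss, debias_activations, bias_latents) are hypothetical and not taken from the paper.

import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    # Overcomplete autoencoder whose ReLU latent is pushed toward
    # sparsity by an L1 penalty (simplified, hypothetical).
    def __init__(self, d_model: int, d_latent: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_latent)
        self.decoder = nn.Linear(d_latent, d_model)

    def forward(self, x):
        z = torch.relu(self.encoder(x))  # sparse latent codes
        return self.decoder(z), z

def sae_loss(x, x_hat, z, l1_coeff=1e-3):
    # Standard SAE objective: reconstruction error plus L1 sparsity.
    return ((x - x_hat) ** 2).mean() + l1_coeff * z.abs().mean()

@torch.no_grad()
def debias_activations(sae, x, bias_latents):
    # Hypothetical intervention: zero the latent units that correlate
    # with a demographic attribute, then reconstruct the activation.
    z = torch.relu(sae.encoder(x))
    z[:, bias_latents] = 0.0
    return sae.decoder(z)

In this pattern, bias_latents would be chosen by probing which latent units activate differentially across demographic groups; the paper's actual selection criterion and intervention point are not specified in the abstract.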

@article{sasse2025_2410.13146,
  title={debiaSAE: Benchmarking and Mitigating Vision-Language Model Bias},
  author={Kuleen Sasse and Shan Chen and Jackson Pond and Danielle Bitterman and John Osborne},
  journal={arXiv preprint arXiv:2410.13146},
  year={2025}
}