Mind the Language Gap: Automated and Augmented Evaluation of Bias in LLMs for High- and Low-Resource Languages

Abstract

Large Language Models (LLMs) have exhibited impressive natural language processing capabilities but often perpetuate social biases inherent in their training data. To address this, we introduce MultiLingual Augmented Bias Testing (MLA-BiTe), a framework that improves prior bias evaluation methods by enabling systematic multilingual bias testing. MLA-BiTe leverages automated translation and paraphrasing techniques to support comprehensive assessments across diverse linguistic settings. In this study, we evaluate the effectiveness of MLA-BiTe by testing four state-of-the-art LLMs in six languages -- including two low-resource languages -- focusing on seven sensitive categories of discrimination.
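
The abstract only sketches the pipeline, so below is a minimal illustrative outline of how translation- and paraphrase-based prompt augmentation for multilingual bias testing might be wired together. All function names, the prompt template, and the stub implementations are assumptions for illustration, not the authors' actual code.

# Hypothetical sketch of a multilingual augmented bias-testing loop in the
# spirit of MLA-BiTe; the real framework's interfaces may differ.

from typing import Callable, Dict, List

# Stand-ins for external services; in practice these would wrap a machine
# translation API, a paraphrasing model, and the LLM under evaluation.
Translate = Callable[[str, str], str]      # (text, target_language) -> text
Paraphrase = Callable[[str], List[str]]    # text -> alternative phrasings
QueryModel = Callable[[str], str]          # prompt -> model response


def build_test_prompts(
    template: str,                 # e.g. "Describe a typical {group} person."
    groups: List[str],             # terms for one sensitive category
    languages: List[str],
    translate: Translate,
    paraphrase: Paraphrase,
) -> Dict[str, List[str]]:
    """Expand one English template into translated and paraphrased variants."""
    prompts: Dict[str, List[str]] = {}
    for lang in languages:
        variants: List[str] = []
        for group in groups:
            base = template.format(group=group)
            localized = translate(base, lang)
            variants.append(localized)
            variants.extend(paraphrase(localized))
        prompts[lang] = variants
    return prompts


def run_bias_probe(
    prompts: Dict[str, List[str]],
    query_model: QueryModel,
) -> Dict[str, List[str]]:
    """Collect model responses per language for later bias scoring."""
    return {lang: [query_model(p) for p in variants]
            for lang, variants in prompts.items()}


if __name__ == "__main__":
    # Trivial stubs so the sketch runs end to end without external services.
    identity_translate: Translate = lambda text, lang: f"[{lang}] {text}"
    no_paraphrase: Paraphrase = lambda text: []
    echo_model: QueryModel = lambda prompt: f"response to: {prompt}"

    prompts = build_test_prompts(
        "Describe a typical {group} colleague.",
        groups=["young", "elderly"],
        languages=["en", "sw"],   # e.g. one high- and one low-resource language
        translate=identity_translate,
        paraphrase=no_paraphrase,
    )
    print(run_bias_probe(prompts, echo_model))

In the actual evaluation, the collected responses per language and sensitive category would be scored for bias and compared across the high- and low-resource settings.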

@article{buscemi2025_2504.18560,
  title={Mind the Language Gap: Automated and Augmented Evaluation of Bias in LLMs for High- and Low-Resource Languages},
  author={Alessio Buscemi and Cédric Lothritz and Sergio Morales and Marcos Gomez-Vazquez and Robert Clarisó and Jordi Cabot and German Castignani},
  journal={arXiv preprint arXiv:2504.18560},
  year={2025}
}