
Addressing Bias in LLMs: Strategies and Application to Fair AI-based Recruitment

Main: 8 pages, Bibliography: 3 pages, 3 figures, 2 tables
Abstract

The use of language technologies in high-stakes settings has increased in recent years, driven largely by the success of Large Language Models (LLMs). However, despite their strong performance, LLMs are susceptible to ethical concerns such as demographic bias, accountability, and privacy. This work analyzes the capacity of Transformer-based systems to learn demographic biases present in the data, using a case study on AI-based automated recruitment. We propose a privacy-enhancing framework that removes gender information from the learning pipeline as a way to mitigate biased behavior in the final tools. Our experiments analyze the influence of data biases on systems built on two different LLMs, and show how the proposed framework effectively prevents the trained systems from reproducing the bias in the data.
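
The abstract does not specify how gender information is removed from the learning pipeline. As a minimal sketch of one standard technique for this kind of privacy-enhancing debiasing, the PyTorch snippet below uses adversarial training with a gradient-reversal layer: an auxiliary head tries to predict gender from the shared embedding, and the reversed gradients push the representation to discard gender-predictive information. All names (FairScorer), dimensions, and hyperparameters here are illustrative assumptions, not the paper's actual implementation.

import torch
import torch.nn as nn

class GradientReversal(torch.autograd.Function):
    # Identity on the forward pass; multiplies the gradient by -lambda on
    # the backward pass, so upstream features are trained to confuse the
    # gender adversary rather than help it.
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class FairScorer(nn.Module):
    # Two heads over a shared (hypothetical) LLM embedding: the task head
    # scores candidates, while the adversarial head tries to recover gender.
    def __init__(self, emb_dim=768, lam=1.0):
        super().__init__()
        self.lam = lam
        self.task_head = nn.Linear(emb_dim, 1)
        self.gender_head = nn.Linear(emb_dim, 2)

    def forward(self, emb):
        score = self.task_head(emb)
        gender_logits = self.gender_head(GradientReversal.apply(emb, self.lam))
        return score, gender_logits

# Toy usage: random vectors stand in for resume embeddings from an LLM encoder.
emb = torch.randn(4, 768, requires_grad=True)
model = FairScorer()
score, gender_logits = model(emb)
task_loss = nn.MSELoss()(score.squeeze(1), torch.rand(4))
adv_loss = nn.CrossEntropyLoss()(gender_logits, torch.tensor([0, 1, 0, 1]))
(task_loss + adv_loss).backward()  # gender gradients reach emb with flipped sign

In this setup, minimizing the combined loss trains the scorer on its task while the reversed adversarial gradients strip gender information from the shared representation, which is the general effect the proposed framework aims for.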

@article{peña2025_2506.11880,
  title={Addressing Bias in LLMs: Strategies and Application to Fair AI-based Recruitment},
  author={Alejandro Peña and Julian Fierrez and Aythami Morales and Gonzalo Mancera and Miguel Lopez and Ruben Tolosana},
  journal={arXiv preprint arXiv:2506.11880},
  year={2025}
}