LFTF: Locating First and Then Fine-Tuning for Mitigating Gender Bias in Large Language Models

Nowadays, Large Language Models (LLMs) have attracted widespread attention due to their powerful performance. However, because of unavoidable exposure to socially biased data during training, LLMs tend to exhibit social biases, particularly gender bias. To better explore and quantify the degree of gender bias in LLMs, we propose a pair of datasets named GenBiasEval and GenHintEval. GenBiasEval evaluates the degree of gender bias in LLMs, accompanied by an evaluation metric named AFGB-Score (Absolutely Fair Gender Bias Score). GenHintEval assesses whether LLMs can provide responses consistent with prompts that contain gender hints, along with the accompanying evaluation metric UB-Score (UnBias Score). Furthermore, to mitigate gender bias in LLMs more effectively, we present the LFTF (Locating First and Then Fine-Tuning) algorithm. The algorithm first ranks specific LLM blocks by their relevance to gender bias in descending order using a metric called BMI (Block Mitigating Importance Score). Based on this ranking, the block most strongly associated with gender bias is then fine-tuned using a carefully designed loss function. Extensive experiments show that the proposed LFTF algorithm significantly mitigates gender bias in LLMs while maintaining their general capabilities.
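To make the locate-then-fine-tune recipe concrete, below is a minimal Python/PyTorch sketch. The per-block score used here (the gradient norm of a he/she next-token log-probability gap) and the debiasing objective are placeholder assumptions for illustration only; they stand in for the paper's BMI score and its carefully designed loss, which are not specified in this abstract. The model name, probe prompts, and hyperparameters are likewise illustrative.

# Illustrative sketch: locate the transformer block most associated with a
# gendered probability gap, then fine-tune only that block.
# The scoring rule and loss are stand-ins, not the paper's definitions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"
tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").to(device)
blocks = model.transformer.h  # GPT-2's list of transformer blocks

prompts = ["The doctor said that", "The nurse said that"]  # toy probes
he_id = tok(" he", add_special_tokens=False).input_ids[0]
she_id = tok(" she", add_special_tokens=False).input_ids[0]

def gender_gap(probe_prompts):
    """Mean |log p(' he') - log p(' she')| at the next-token position."""
    gaps = []
    for prompt in probe_prompts:
        enc = tok(prompt, return_tensors="pt").to(device)
        logits = model(**enc).logits[:, -1, :]      # next-token logits
        logp = torch.log_softmax(logits, dim=-1)
        gaps.append((logp[0, he_id] - logp[0, she_id]).abs())
    return torch.stack(gaps).mean()

# --- Locating: rank blocks by a gradient-based proxy score (descending) ---
model.zero_grad()
gender_gap(prompts).backward()
scores = []
for i, blk in enumerate(blocks):
    norm = sum(p.grad.norm().item() for p in blk.parameters() if p.grad is not None)
    scores.append((norm, i))
scores.sort(reverse=True)
target = scores[0][1]
print("Block most associated with the gap (proxy score):", target)

# --- Fine-tuning: update only the selected block to shrink the gap ---
model.zero_grad()
for p in model.parameters():
    p.requires_grad_(False)
for p in blocks[target].parameters():
    p.requires_grad_(True)

opt = torch.optim.AdamW(blocks[target].parameters(), lr=1e-5)
for step in range(10):                  # a few toy optimization steps
    opt.zero_grad()
    loss = gender_gap(prompts)          # stand-in for the paper's loss
    loss.backward()
    opt.step()

Restricting updates to a single block, as in this sketch, is what lets the method target bias while leaving most parameters untouched, which is consistent with the paper's claim that general capabilities are preserved.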
@article{qin2025_2505.15475,
  title   = {LFTF: Locating First and Then Fine-Tuning for Mitigating Gender Bias in Large Language Models},
  author  = {Zhanyue Qin and Yue Ding and Deyuan Liu and Qingbin Liu and Junxian Cai and Xi Chen and Zhiying Tu and Dianhui Chu and Cuiyun Gao and Dianbo Sui},
  journal = {arXiv preprint arXiv:2505.15475},
  year    = {2025}
}