Auto-Search and Refinement: An Automated Framework for Gender Bias Mitigation in Large Language Models

Pre-training large language models (LLMs) on vast text corpora enhances natural language processing capabilities but risks encoding social biases, particularly gender bias. While parameter-modification methods such as fine-tuning can mitigate bias, they are resource-intensive, unsuitable for closed-source models, and slow to adapt to evolving societal norms. Instruction-based approaches offer flexibility but often compromise task performance. To address these limitations, we propose an automated, model-independent framework that employs an auto-search and refinement paradigm to adaptively generate Fairwords, instructions integrated into input queries that reduce gender bias and enhance response quality. Extensive experiments demonstrate that the framework automatically searches for and dynamically refines Fairwords, effectively mitigating gender bias while preserving task integrity and ensuring compatibility with both API-based and open-source LLMs.
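As a rough illustration of the instruction-based usage described above, the sketch below shows how a generated Fairword could be integrated into an input query before the query is sent to an API-based or open-source LLM. The example Fairword text, the function name, and the prompt layout are illustrative assumptions for this sketch, not the instructions actually produced or the prompt format used by the framework.

```python
# Minimal sketch (assumed layout): prepending a Fairword instruction to a user query.
# The Fairword below is a hypothetical example, not one generated by the framework.

EXAMPLE_FAIRWORD = (
    "Answer without relying on gender stereotypes, and treat all genders "
    "equally unless the question explicitly requires otherwise."
)

def build_debiased_prompt(user_query: str, fairword: str = EXAMPLE_FAIRWORD) -> str:
    """Integrate a Fairword into the input query.

    Because the Fairword is injected purely at the input level, the same
    prompt can be passed to a closed-source API model or an open-source
    model without modifying any model parameters.
    """
    return f"{fairword}\n\n{user_query}"

if __name__ == "__main__":
    query = "Describe a typical day for a nurse and for an engineer."
    print(build_debiased_prompt(query))
```

In this input-level setup, updating the mitigation strategy only requires refining the Fairword text, which is what makes the approach adaptable to evolving norms and usable with models whose weights cannot be changed.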
@article{xu2025_2502.11559,
  title   = {Auto-Search and Refinement: An Automated Framework for Gender Bias Mitigation in Large Language Models},
  author  = {Yue Xu and Chengyan Fu and Li Xiong and Sibei Yang and Wenjie Wang},
  journal = {arXiv preprint arXiv:2502.11559},
  year    = {2025}
}