BiasGuard: A Reasoning-enhanced Bias Detection Tool For Large Language Models

Identifying bias in LLM-generated content is a crucial prerequisite for ensuring fairness in LLMs. Existing methods, such as fairness classifiers and LLM-based judges, struggle to understand underlying intentions and lack explicit criteria for fairness judgment. In this paper, we introduce BiasGuard, a novel bias detection tool that explicitly analyzes inputs and reasons through fairness specifications to provide accurate judgments. BiasGuard is implemented through a two-stage approach: the first stage initializes the model to reason explicitly over fairness specifications, while the second stage leverages reinforcement learning to enhance its reasoning and judgment capabilities. Our experiments, conducted across five datasets, demonstrate that BiasGuard outperforms existing tools, improving accuracy and reducing over-fairness misjudgments. We also highlight the importance of reasoning-enhanced decision-making and provide evidence for the effectiveness of our two-stage optimization pipeline.
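To make the two-stage pipeline concrete, the sketch below illustrates one plausible way it could be set up: stage 1 formats supervised samples that force the model to reason over a fairness specification before giving a verdict, and stage 2 scores sampled completions with a scalar reward on the final judgment for reinforcement learning. This is a minimal illustration under assumptions, not the authors' implementation; the specification wording, the `<think>`/`<verdict>` tags, and the reward values are all hypothetical.

```python
# Minimal sketch (not the authors' code) of a two-stage bias-detection pipeline:
#   Stage 1: supervised initialization on reasoning traces that cite a fairness
#            specification before giving a biased / unbiased verdict.
#   Stage 2: reinforcement learning with a reward on the final judgment.
# The specification text, tags, and reward weights are illustrative assumptions.

from dataclasses import dataclass
import re

FAIRNESS_SPEC = (
    "A statement is biased if it stereotypes, demeans, or treats a protected "
    "group unequally without justification."  # assumed wording, not from the paper
)

@dataclass
class Example:
    text: str       # LLM-generated content to audit
    reasoning: str  # gold reasoning trace (stage-1 supervision)
    label: str      # "biased" or "unbiased"

def build_sft_sample(ex: Example) -> dict:
    """Stage 1: format a supervised sample that elicits explicit reasoning
    over the fairness specification before the verdict."""
    prompt = (
        f"Fairness specification: {FAIRNESS_SPEC}\n"
        f"Content: {ex.text}\n"
        "Analyze the content against the specification, then give a verdict."
    )
    target = f"<think>{ex.reasoning}</think><verdict>{ex.label}</verdict>"
    return {"prompt": prompt, "target": target}

VERDICT_RE = re.compile(r"<verdict>(biased|unbiased)</verdict>", re.IGNORECASE)

def reward(completion: str, gold_label: str) -> float:
    """Stage 2: scalar reward for an RL-sampled completion.
    +1 for a correct, well-formed verdict; penalties otherwise (assumed values)."""
    match = VERDICT_RE.search(completion)
    if match is None:
        return -0.5  # malformed output: no parseable verdict
    return 1.0 if match.group(1).lower() == gold_label else -1.0

if __name__ == "__main__":
    ex = Example(
        text="Women are too emotional to lead engineering teams.",
        reasoning="The statement stereotypes a protected group, violating the specification.",
        label="biased",
    )
    print(build_sft_sample(ex)["prompt"])
    print(reward("<think>...</think><verdict>biased</verdict>", ex.label))  # 1.0
```

A reward of this shape would let a policy-gradient method reinforce both well-formed reasoning output and correct final judgments, which matches the abstract's description of the second stage at a high level.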
@article{fan2025_2504.21299,
  title   = {BiasGuard: A Reasoning-enhanced Bias Detection Tool For Large Language Models},
  author  = {Zhiting Fan and Ruizhe Chen and Zuozhu Liu},
  journal = {arXiv preprint arXiv:2504.21299},
  year    = {2025}
}