67
0

Evaluating Social Biases in LLM Reasoning

Abstract

In the recent development of AI reasoning, large language models (LLMs) are trained to automatically generate chain-of-thought reasoning steps, which have demonstrated compelling performance on math and coding tasks. However, when bias is mixed within the reasoning process to form strong logical arguments, it could cause even more harmful results and further induce hallucinations. In this paper, we have evaluated the 8B and 32B variants of DeepSeek-R1 against their instruction tuned counterparts on the BBQ dataset, and investigated the bias that is elicited out and being amplified through reasoning steps. To the best of our knowledge, this empirical study is the first to assess bias issues in LLM reasoning.

View on arXiv
@article{wu2025_2502.15361,
  title={ Evaluating Social Biases in LLM Reasoning },
  author={ Xuyang Wu and Jinming Nian and Zhiqiang Tao and Yi Fang },
  journal={arXiv preprint arXiv:2502.15361},
  year={ 2025 }
}
Comments on this paper