
Rewarding Curse: Analyze and Mitigate Reward Modeling Issues for LLM Reasoning

Abstract

Chain-of-thought (CoT) prompting varies widely in performance across reasoning tasks. Prior work attempts to evaluate it but falls short of an in-depth analysis of the patterns that influence CoT. In this paper, we study CoT performance from two perspectives: effectiveness and faithfulness. For the former, we identify key factors that influence the performance improvement brought by CoT, including problem difficulty, information gain, and information flow. For the latter, we interpret the unfaithful-CoT issue through a joint analysis of the information interaction among the question, the CoT, and the answer. The results demonstrate that, when the LLM predicts an answer, it can recall correct information that is missing from the CoT directly from the question, which gives rise to the unfaithfulness. Finally, we propose a novel algorithm to mitigate this issue: we recall extra information from the question to enhance CoT generation, and we evaluate candidate CoTs based on their information gain. Extensive experiments demonstrate that our approach improves both the faithfulness and effectiveness of CoT.
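The abstract only sketches the mitigation procedure. Below is a minimal, hypothetical Python sketch of what that procedure could look like, under the assumption of a generic `llm` object with `generate` and `answer_probability` methods (these names, the information-gain approximation, and all helper functions are illustrative, not the authors' actual implementation).

```python
import math

# Hypothetical sketch of the proposed mitigation: recall information from the
# question to condition CoT generation, then rank candidate CoTs by an
# information-gain score. All APIs below are assumed, not from the paper.

def answer_confidence(llm, question: str, cot: str | None = None) -> float:
    """Probability the model assigns to its predicted answer, optionally
    conditioned on a CoT. Placeholder for a real answer-likelihood scorer."""
    prompt = question if cot is None else f"{question}\n{cot}"
    return llm.answer_probability(prompt)  # assumed API

def information_gain(llm, question: str, cot: str) -> float:
    """Reduction in answer uncertainty once the CoT is provided, i.e.
    H(A | Q) - H(A | Q, CoT), approximated via answer confidence."""
    h_q = -math.log(answer_confidence(llm, question))
    h_q_cot = -math.log(answer_confidence(llm, question, cot))
    return h_q - h_q_cot

def generate_faithful_cot(llm, question: str, n_candidates: int = 8) -> str:
    # Step 1: recall extra information from the question to enhance generation.
    recalled = llm.generate(f"List the key facts in the question:\n{question}")
    # Step 2: sample candidate CoTs conditioned on the recalled information.
    candidates = [
        llm.generate(
            f"{question}\nRelevant facts: {recalled}\nLet's think step by step."
        )
        for _ in range(n_candidates)
    ]
    # Step 3: keep the CoT with the highest information gain, so the selected
    # chain actually carries the evidence the model uses for its answer.
    return max(candidates, key=lambda cot: information_gain(llm, question, cot))
```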

@article{li2025_2503.05188,
  title={Rewarding Curse: Analyze and Mitigate Reward Modeling Issues for LLM Reasoning},
  author={Jiachun Li and Pengfei Cao and Yubo Chen and Jiexin Xu and Huaijun Li and Xiaojian Jiang and Kang Liu and Jun Zhao},
  journal={arXiv preprint arXiv:2503.05188},
  year={2025}
}