Supervised Optimism Correction: Be Confident When LLMs Are Sure

Abstract

In this work, we establish a novel theoretical connection between supervised fine-tuning and offline reinforcement learning under the token-level Markov decision process, revealing that large language models indeed learn an implicit Q-function for inference. Through this theoretical lens, we demonstrate that the widely used beam search method suffers from unacceptable over-optimism, where inference errors are inevitably amplified due to inflated Q-value estimations of suboptimal steps. To address this limitation, we propose Supervised Optimism Correction (SOC), which introduces a simple yet effective auxiliary loss for token-level Q-value estimations during supervised fine-tuning. Specifically, the auxiliary loss employs implicit value regularization to boost model confidence in expert-demonstrated responses, thereby suppressing over-optimism toward insufficiently supervised responses. Extensive experiments on mathematical reasoning benchmarks, including GSM8K, MATH, and GAOKAO, showcase the superiority of the proposed SOC with beam search across a series of open-source models.
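
To make the idea concrete, below is a minimal, illustrative sketch (not the paper's exact objective) of how an auxiliary optimism-correction term could be combined with the standard SFT cross-entropy loss. It treats next-token logits as implicit Q-values and penalizes the gap between the soft state value (logsumexp over the vocabulary) and the Q-value of the expert-demonstrated token, which raises confidence on expert tokens relative to unsupervised alternatives. The function name and the `aux_weight` hyperparameter are assumptions for illustration.

```python
import torch
import torch.nn.functional as F


def sft_loss_with_optimism_correction(logits, expert_tokens, aux_weight=0.1):
    """Illustrative sketch of SFT cross-entropy plus an auxiliary
    implicit-value-regularization term; the exact SOC loss may differ.

    logits:        (batch, seq_len, vocab) next-token logits
    expert_tokens: (batch, seq_len)        expert-demonstrated token ids
    """
    vocab = logits.size(-1)

    # Standard supervised fine-tuning objective (token-level cross-entropy).
    ce = F.cross_entropy(logits.reshape(-1, vocab), expert_tokens.reshape(-1))

    # Interpret logits as implicit Q-values: Q(s, a) corresponds to the logit
    # of token a at state s (the prefix so far).
    q_expert = logits.gather(-1, expert_tokens.unsqueeze(-1)).squeeze(-1)  # (B, T)

    # Soft state value V(s) = logsumexp over all candidate tokens.
    v_soft = torch.logsumexp(logits, dim=-1)                               # (B, T)

    # Auxiliary regularizer: shrink V(s) - Q(s, a_expert), i.e. boost the
    # implicit Q-value of expert tokens so beam search is not lured toward
    # over-optimistic, insufficiently supervised steps.
    aux = (v_soft - q_expert).mean()

    return ce + aux_weight * aux
```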

@article{zhang2025_2504.07527,
  title={Supervised Optimism Correction: Be Confident When LLMs Are Sure},
  author={Junjie Zhang and Rushuai Yang and Shunyu Liu and Ting-En Lin and Fei Huang and Yi Chen and Yongbin Li and Dacheng Tao},
  journal={arXiv preprint arXiv:2504.07527},
  year={2025}
}