UCPO: Uncertainty-Aware Policy Optimization

Xianzhou Zeng
Jing Huang
Chunmei Xie
Gongrui Nan
Siye Chen
Mengyu Lu
Weiqi Xiong
Qixuan Zhou
Junhao Zhang
Qiang Zhu
Yadong Li
Xingzhong Xu
Main: 8 pages · Appendix: 4 pages · Bibliography: 2 pages · 13 figures · 4 tables
Abstract

The key to building trustworthy Large Language Models (LLMs) lies in endowing them with the inherent ability to express uncertainty, mitigating the hallucinations that restrict their use in high-stakes applications. However, existing RL paradigms such as GRPO often suffer from Advantage Bias caused by binary decision spaces and static uncertainty rewards, inducing either excessive conservatism or overconfidence. To tackle this challenge, this paper identifies the root causes of reward hacking and overconfidence in current RL paradigms that incorporate uncertainty-based rewards, and on this basis proposes the UnCertainty-Aware Policy Optimization (UCPO) framework. UCPO employs Ternary Advantage Decoupling to separate and independently normalize deterministic and uncertain rollouts, thereby eliminating advantage bias. It further introduces a Dynamic Uncertainty Reward Adjustment mechanism that calibrates uncertainty weights in real time according to model evolution and instance difficulty. Experimental results on mathematical reasoning and general tasks demonstrate that UCPO effectively resolves the reward imbalance, significantly improving the reliability and calibration of models on questions beyond their knowledge boundaries.
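
The abstract is high-level, but its description of the two mechanisms is concrete enough to sketch. Below is a minimal, hypothetical Python illustration of how Ternary Advantage Decoupling and the dynamic uncertainty weight might look in a GRPO-style trainer; the function names, the per-pool normalization details, and the difficulty-based interpolation are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def compute_ucpo_advantages(rewards, is_uncertain, eps=1e-8):
    """Sketch of Ternary Advantage Decoupling (assumed form).

    The decision space is ternary (correct / incorrect / abstain), but the
    abstract describes normalizing two pools independently: deterministic
    rollouts (correct or incorrect answers) and uncertain rollouts (explicit
    abstentions). Per-pool mean/std normalization keeps a large pool of one
    type from biasing the advantages of the other -- the "Advantage Bias"
    attributed to binary-decision GRPO.
    """
    rewards = np.asarray(rewards, dtype=float)
    is_uncertain = np.asarray(is_uncertain, dtype=bool)
    adv = np.zeros_like(rewards)
    for mask in (is_uncertain, ~is_uncertain):
        if mask.sum() > 1:  # need at least 2 samples for a meaningful std
            pool = rewards[mask]
            adv[mask] = (pool - pool.mean()) / (pool.std() + eps)
    return adv

def uncertainty_weight(accuracy_ema, w_min=0.1, w_max=0.9):
    """Hypothetical Dynamic Uncertainty Reward Adjustment.

    The abstract says uncertainty weights are calibrated in real time from
    model evolution and instance difficulty. One plausible proxy: track a
    rolling per-instance accuracy and reward abstention more on instances
    the current policy rarely solves, less on instances it has mastered.
    """
    return w_min + (w_max - w_min) * (1.0 - accuracy_ema)
```

For example, with eight rollouts where three abstain (each rewarded `uncertainty_weight(0.25) = 0.7`) and five answer deterministically (rewards 1 or 0), the abstaining rollouts are ranked only against each other, so a burst of confident wrong answers cannot make abstention look spuriously attractive, and vice versa.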
