
Process Supervision-Guided Policy Optimization for Code Generation

Abstract

Reinforcement learning (RL) from unit test feedback has improved large language models' (LLMs') code generation, but it relies on sparse rewards that are available only after an entire program has been evaluated, which limits learning efficiency and incremental improvement. When the generated code fails all unit tests, no learning signal is received, hindering progress on complex tasks. To address this, we propose a Process Reward Model (PRM) that delivers dense, line-level feedback on code correctness during generation, mimicking how humans refine code and providing immediate guidance. We explore various strategies for training PRMs and integrating them into the RL framework, and find that using PRMs both as dense rewards and for value function initialization significantly boosts performance. Our experimental results also highlight the effectiveness of PRMs in enhancing RL-driven code generation, especially in long-horizon scenarios.
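
To make the reward-shaping idea concrete, the sketch below shows one plausible way to combine a sparse unit-test reward with dense line-level PRM scores into per-line rewards for a PPO-style learner. This is a minimal illustration under our own assumptions: the function names, weights, and the toy PRM stub are hypothetical and are not taken from the paper.

```python
# Hedged sketch: merging a sparse unit-test outcome with dense line-level
# PRM scores into per-line rewards, in the spirit of the approach described
# in the abstract. All names and constants here are illustrative assumptions.

from typing import Callable, List


def dense_rewards(
    code_lines: List[str],
    passed_all_tests: bool,
    prm_score_fn: Callable[[List[str]], List[float]],
    prm_weight: float = 0.1,
    test_weight: float = 1.0,
) -> List[float]:
    """Return one reward per generated line.

    Each line receives a small dense reward from the PRM (its estimated
    correctness score), and the final line additionally receives the sparse
    unit-test outcome.
    """
    prm_scores = prm_score_fn(code_lines)           # one score per line, in [0, 1]
    rewards = [prm_weight * s for s in prm_scores]  # dense, line-level shaping
    rewards[-1] += test_weight * (1.0 if passed_all_tests else 0.0)  # sparse terminal reward
    return rewards


def discounted_returns(rewards: List[float], gamma: float = 1.0) -> List[float]:
    """Per-line returns, as would feed an advantage estimator in PPO."""
    returns, running = [], 0.0
    for r in reversed(rewards):
        running = r + gamma * running
        returns.append(running)
    return list(reversed(returns))


if __name__ == "__main__":
    # Toy PRM stub: pretends later lines are slightly less likely to be correct.
    def toy_prm(lines: List[str]) -> List[float]:
        return [max(0.0, 1.0 - 0.1 * i) for i in range(len(lines))]

    code = ["def add(a, b):", "    return a + b"]
    r = dense_rewards(code, passed_all_tests=True, prm_score_fn=toy_prm)
    print(r, discounted_returns(r))
```

Intermediate lines that fail all unit tests still receive nonzero shaping from the PRM in this sketch, which is the property the abstract highlights; how the paper actually trains the PRM and initializes the value function from it is not reproduced here.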

@article{dai2025_2410.17621,
  title={Process Supervision-Guided Policy Optimization for Code Generation},
  author={Ning Dai and Zheng Wu and Renjie Zheng and Ziyun Wei and Wenlei Shi and Xing Jin and Guanlin Liu and Chen Dun and Liang Huang and Lin Yan},
  journal={arXiv preprint arXiv:2410.17621},
  year={2025}
}