
From Mathematical Reasoning to Code: Generalization of Process Reward Models in Test-Time Scaling

Main: 9 pages
8 figures
Bibliography: 1 page
3 tables
Appendix: 4 pages
Abstract

Recent advances in improving the reasoning capabilities of Large Language Models have underscored the efficacy of Process Reward Models (PRMs) in addressing intermediate errors through structured feedback mechanisms. This study analyzes PRMs from multiple perspectives, including training methodologies, scalability, and generalization capabilities. We investigate the interplay between pre-training and reward model training FLOPs to assess their influence on PRM efficiency and accuracy in complex reasoning tasks. Our analysis reveals diminishing returns in performance as PRM scale increases, highlighting the importance of balancing model size and computational cost. Furthermore, the diversity of the training data significantly affects PRM performance, underscoring the need for varied data to improve both accuracy and efficiency. We further examine test-time scaling strategies, identifying Monte Carlo Tree Search as the most effective method when computational resources are abundant, while Best-of-N Sampling serves as a practical alternative under resource-limited conditions. Notably, our findings indicate that PRMs trained on mathematical datasets perform comparably to those tailored for code generation, suggesting robust cross-domain generalization. Employing a gradient-based metric, we observe that PRMs tend to select responses with similar underlying patterns, a finding that further informs their optimization.
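
To make the Best-of-N strategy mentioned above concrete, the sketch below shows how a PRM can rank sampled solutions at test time. This is a minimal illustration, not the paper's implementation: the `generate`, `prm_score`, and `split_steps` callables are hypothetical placeholders, and aggregating per-step rewards with `min` is one common convention that the abstract does not specify.

```python
from typing import Callable, List


def best_of_n(
    prompt: str,
    generate: Callable[[str], str],                      # hypothetical: samples one candidate solution
    prm_score: Callable[[str, List[str]], List[float]],  # hypothetical: PRM score per reasoning step
    split_steps: Callable[[str], List[str]],             # hypothetical: splits a solution into steps
    n: int = 16,
) -> str:
    """Best-of-N sampling: draw N candidates, score each with a process
    reward model, and return the highest-scoring candidate."""
    best_solution, best_score = "", float("-inf")
    for _ in range(n):
        candidate = generate(prompt)
        steps = split_steps(candidate)
        step_scores = prm_score(prompt, steps)
        # Aggregate per-step rewards; taking the minimum is a common choice
        # because a single flawed step can invalidate the whole solution.
        score = min(step_scores) if step_scores else float("-inf")
        if score > best_score:
            best_solution, best_score = candidate, score
    return best_solution
```

Compared with Monte Carlo Tree Search, which expands and evaluates partial reasoning trajectories, Best-of-N only scores complete candidates, which is why the paper positions it as the cheaper option when compute is limited.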

@article{chen2025_2506.00027,
  title={From Mathematical Reasoning to Code: Generalization of Process Reward Models in Test-Time Scaling},
  author={Zhengyu Chen and Yudong Wang and Teng Xiao and Ruochen Zhou and Xuesheng Yang and Wei Wang and Zhifang Sui and Jingang Wang},
  journal={arXiv preprint arXiv:2506.00027},
  year={2025}
}