
Adversarial Training for Process Reward Models

9 pages (main) + 11 pages (appendix), 4 figures, 5 tables, 5 pages of references
Abstract

Process Reward Models (PRMs) enhance the reasoning ability of LLMs by providing step-level supervision. However, their widespread adoption is limited by expensive manual step-level annotation and the poor generalization of static training data to novel errors. We introduce Adversarially Trained PRMs (\texttt{APRM}), in which a Generator ($G$) learns to produce reasoning errors that deceive a PRM ($R$), while $R$ concurrently learns to detect them. This interaction yields progressively harder negatives for $R$, improving its robustness and generalization to novel errors without requiring manual step-level labels. Averaged across diverse mathematical reasoning benchmarks, \texttt{APRM} improves solver accuracy by +3.4 percentage points (pp) over the strongest PRM baseline, and achieves gains of +5.3 pp on out-of-distribution tasks.
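
The abstract describes an alternating adversarial loop between the generator $G$ and the PRM $R$. The sketch below illustrates one plausible shape of that loop; it is a toy illustration under assumed interfaces (the class names `StepGenerator`, `ProcessRewardModel`, and all heuristics inside them are hypothetical stand-ins, not the paper's implementation).

```python
# Minimal sketch of an adversarial PRM training loop, assuming:
#   - G corrupts correct reasoning steps to produce hard negatives,
#   - R scores each step and is trained to reject the corrupted ones.
# All names and update rules here are illustrative placeholders.

import random


class StepGenerator:
    """Toy generator G: corrupts a correct reasoning step to try to fool the PRM."""

    def __init__(self):
        self.temperature = 1.0  # crude knob standing in for G's learned policy

    def corrupt(self, step: str) -> str:
        # Placeholder corruption: flip one digit; a real G would be an LLM policy.
        digits = [c for c in step if c.isdigit()]
        if not digits:
            return step + " (unsupported leap)"
        target = random.choice(digits)
        flipped = str((int(target) + random.randint(1, 9)) % 10)
        return step.replace(target, flipped, 1)

    def update(self, fooled: bool):
        # Reward G when its corrupted step slipped past R (adversarial objective).
        self.temperature *= 1.05 if fooled else 0.98


class ProcessRewardModel:
    """Toy PRM R: scores a step in [0, 1]; higher means 'looks correct'."""

    def __init__(self):
        self.threshold = 0.5

    def score(self, step: str) -> float:
        # Placeholder heuristic; a real R would be a trained step-level classifier.
        return 0.2 if "unsupported" in step else random.random()

    def update(self, step: str, label: int):
        # A real R would take a gradient step on (step, label); here we just
        # nudge the acceptance threshold whenever R misclassifies the step.
        accepted = self.score(step) >= self.threshold
        if accepted != bool(label):
            self.threshold += 0.01 if label == 0 else -0.01


def adversarial_round(G, R, correct_steps):
    """One round: G crafts hard negatives, R trains on them plus the positives."""
    for step in correct_steps:
        negative = G.corrupt(step)
        fooled = R.score(negative) >= R.threshold  # R mistakenly accepts the error
        G.update(fooled)               # G is rewarded for fooling R
        R.update(negative, label=0)    # R learns to reject the corrupted step
        R.update(step, label=1)        # ...while still accepting the correct one


if __name__ == "__main__":
    G, R = StepGenerator(), ProcessRewardModel()
    steps = ["3 * 4 = 12, so the total is 12", "x + 2 = 5, hence x = 3"]
    for _ in range(10):
        adversarial_round(G, R, steps)
    print(f"final PRM threshold: {R.threshold:.2f}")
```

The key design point the abstract emphasizes is that the negatives come from $G$ rather than from manual step-level annotation, so $R$ sees progressively harder errors as $G$ adapts.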
