Adversarial Training for Process Reward Models
Process Reward Models (PRMs) enhance the reasoning ability of LLMs by providing step-level supervision. However, their widespread adoption is limited by expensive manual step-level annotation and the poor generalization of static training data to novel errors. We introduce Adversarially Trained PRMs (\texttt{APRM}), in which a Generator learns to produce reasoning errors that deceive a PRM, while the PRM concurrently learns to detect them. This interaction yields progressively harder negatives for the PRM, improving its robustness and generalization to novel errors without requiring manual step-level labels. Averaged across diverse mathematical reasoning benchmarks, \texttt{APRM} improves solver accuracy over the strongest PRM baseline, with further gains on out-of-distribution tasks.
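The adversarial interaction described above can be pictured as a GAN-style alternation between the Generator and the PRM. The sketch below is a minimal illustration under assumed toy models (small MLPs over fixed-size step embeddings instead of actual LLMs); all class names, dimensions, losses, and hyperparameters are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of an APRM-style adversarial loop (illustrative only).
# Assumptions: reasoning steps are represented as fixed-size embeddings,
# the Generator perturbs correct steps into erroneous ones, and the PRM
# is a binary step-correctness classifier.
import torch
import torch.nn as nn
import torch.nn.functional as F

STEP_DIM = 64  # assumed embedding size of a reasoning step

class Generator(nn.Module):
    """Toy stand-in for the Generator: perturbs a correct step into an erroneous one."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(STEP_DIM, 128), nn.ReLU(), nn.Linear(128, STEP_DIM))

    def forward(self, step):
        return step + self.net(step)  # adversarial perturbation of the step

class PRM(nn.Module):
    """Toy stand-in for the PRM: scores whether a step is correct (1) or erroneous (0)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(STEP_DIM, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, step):
        return self.net(step).squeeze(-1)  # logit that the step is correct

gen, prm = Generator(), PRM()
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(prm.parameters(), lr=1e-3)

for it in range(200):
    correct_steps = torch.randn(32, STEP_DIM)   # stand-in for verified-correct steps
    fake_steps = gen(correct_steps)             # Generator fabricates hard negatives

    # PRM update: accept correct steps, flag generated errors.
    d_loss = (F.binary_cross_entropy_with_logits(prm(correct_steps), torch.ones(32))
              + F.binary_cross_entropy_with_logits(prm(fake_steps.detach()), torch.zeros(32)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: produce errors the PRM mistakes for correct steps.
    g_loss = F.binary_cross_entropy_with_logits(prm(fake_steps), torch.ones(32))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

In this toy alternation, each PRM update is trained against the Generator's latest, harder negatives, mirroring the progressive hardening of training data that the abstract attributes to \texttt{APRM}.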