DreamPRM-1.5: Unlocking the Potential of Each Instance for Multimodal Process Reward Model Training
Main: 12 pages · Bibliography: 3 pages · Appendix: 13 pages · 28 figures · 15 tables
Abstract
Training multimodal process reward models (PRMs) is challenged by distribution shifts and noisy data. We introduce DreamPRM-1.5, an instance-reweighted framework that adaptively adjusts the importance of each training example via bi-level optimization. We design two complementary strategies: Instance Table, effective for smaller datasets, and Instance Net, which scales to larger ones. Integrated into test-time scaling, DreamPRM-1.5 achieves 84.6% accuracy on the MMMU benchmark, surpassing GPT-5.
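The core idea of instance reweighting via bi-level optimization can be sketched in a few lines: an inner loop fits the model on a weighted training loss, and an outer loop adjusts per-example weights so that the fitted model does well on a clean meta set, which pushes the weights of noisy examples down. The toy setup below (linear regression, finite-difference outer gradients, one learnable logit per example as a stand-in for the paper's Instance Table) is purely illustrative and is not the paper's actual training procedure.

```python
import numpy as np

# Toy bi-level instance reweighting (illustrative, not the paper's method).
rng = np.random.default_rng(0)
n_train, n_meta, d = 40, 20, 3
X = rng.normal(size=(n_train, d))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true
y[:10] += rng.normal(scale=5.0, size=10)   # first 10 training labels are noisy
Xm = rng.normal(size=(n_meta, d))
ym = Xm @ w_true                           # clean meta/validation set

alpha = np.zeros(n_train)                  # one learnable logit per example

def weights(a):
    return 1.0 / (1.0 + np.exp(-a))       # squash logits to (0, 1)

def inner_fit(a):
    # Inner loop: weighted least squares for the model parameters.
    W = np.diag(weights(a))
    return np.linalg.solve(X.T @ W @ X + 1e-3 * np.eye(d), X.T @ W @ y)

def meta_loss(a):
    w = inner_fit(a)
    return np.mean((Xm @ w - ym) ** 2)

eps, lr = 1e-3, 50.0
for _ in range(200):
    # Outer loop: finite-difference gradient of the meta loss w.r.t. alpha.
    base = meta_loss(alpha)
    grad = np.zeros(n_train)
    for i in range(n_train):
        a2 = alpha.copy()
        a2[i] += eps
        grad[i] = (meta_loss(a2) - base) / eps
    alpha -= lr * grad

s = weights(alpha)
print("mean weight, noisy examples:", s[:10].mean())
print("mean weight, clean examples:", s[10:].mean())
```

After optimization, the noisy examples receive systematically lower weights than the clean ones, which is the behavior instance reweighting is designed to produce. The paper's Instance Net variant replaces the per-example table with a network that predicts each weight, avoiding a parameter per training instance.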
