
Coarse-to-Fine Process Reward Modeling for Mathematical Reasoning

Abstract

The Process Reward Model (PRM) plays a crucial role in mathematical reasoning tasks, requiring high-quality supervised process data. However, we observe that reasoning steps generated by Large Language Models (LLMs) often fail to exhibit strictly incremental information, leading to redundancy that can hinder effective reasoning. To address this issue, we propose a simple yet effective coarse-to-fine strategy. Instead of focusing on detecting redundant steps, our approach first applies a coarse-grained window to merge adjacent reasoning steps into unified, holistic steps. The window size is then progressively reduced to extract fine-grained reasoning steps, enabling data collection at multiple granularities for training. By leveraging this hierarchical refinement process, our method mitigates redundancy while preserving essential fine-grained knowledge. Extensive experiments on two reasoning datasets across three loss criteria validate the effectiveness and versatility of our approach.
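The abstract outlines the core procedure: adjacent reasoning steps are first merged under a coarse window, which is then progressively shrunk to recover fine-grained steps, yielding supervision data at multiple granularities. A minimal Python sketch of that windowed collection appears below; the function names (`merge_steps`, `collect_multi_granularity`) and the simple concatenation-based merging are illustrative assumptions, not the paper's actual implementation.

```python
from typing import List, Tuple

def merge_steps(steps: List[str], window: int) -> List[str]:
    """Merge adjacent reasoning steps into holistic steps of size `window`."""
    return [
        " ".join(steps[i:i + window])
        for i in range(0, len(steps), window)
    ]

def collect_multi_granularity(steps: List[str], max_window: int = 3) -> List[Tuple[int, List[str]]]:
    """Collect merged step sequences at progressively finer granularities.

    Starts with a coarse window of `max_window` adjacent steps and shrinks
    the window down to 1 (the original fine-grained steps), producing
    candidate training data at every granularity.
    """
    collected = []
    for window in range(max_window, 0, -1):
        collected.append((window, merge_steps(steps, window)))
    return collected

if __name__ == "__main__":
    # Hypothetical solution trace for "2x + 3 = 11".
    raw_steps = [
        "Let x be the unknown number.",
        "Set up the equation 2x + 3 = 11.",
        "Subtract 3 from both sides: 2x = 8.",
        "Divide by 2: x = 4.",
    ]
    for window, merged in collect_multi_granularity(raw_steps, max_window=2):
        print(f"window={window}: {merged}")
```

In practice, each merged step would be paired with a process label (e.g., correct/incorrect) before being added to the PRM training set; the sketch only illustrates the coarse-to-fine windowing itself.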

@article{hu2025_2501.13622,
  title={Coarse-to-Fine Process Reward Modeling for Mathematical Reasoning},
  author={Yulan Hu and Sheng Ouyang and Yong Liu},
  journal={arXiv preprint arXiv:2501.13622},
  year={2025}
}