Improving Value-based Process Verifier via Structural Prior Injection

21 February 2025
Zetian Sun
Dongfang Li
Baotian Hu
Jun Yu
Min-Ling Zhang
Abstract

In the Large Language Model (LLM) reasoning scenario, state values are often estimated via Monte Carlo sampling. Although Monte Carlo estimation is an elegant method with little inductive bias, the limited number of samples inevitably introduces noise and errors. To address this problem, we inject a structural prior into the value representation and transform the scalar value into the expectation of a pre-defined categorical distribution, representing the noise and errors from a distributional perspective. Specifically, by treating the result of Monte Carlo sampling as a single sample from the prior ground-truth Binomial distribution, we quantify the sampling error as the mismatch between the posterior estimated distribution and the ground-truth distribution, which is then optimized via distribution selection optimization. We evaluate value-based process verifiers on Best-of-N and Beam search tasks. Compared with the scalar value representation, we show that reasonable structural prior injection, induced by different objective functions or optimization methods, improves the performance of value-based process verifiers by about 1–2 points at little-to-no cost. We also show that verifier performance varies greatly under different structural priors despite the priors sharing the same optimal solution, indicating the importance of reasonable structural prior injection.
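Illustrative sketch (not from the paper; the function names, the 11-bin discretization, and the Binomial-likelihood weighting below are assumptions made only for exposition): in plain Python, a Monte Carlo outcome of k successful rollouts out of n can be mapped to a categorical target over discretized values, with a scalar value recovered as the expectation of that distribution instead of the raw success rate.

import math

def mc_scalar_value(successes: int, rollouts: int) -> float:
    """Plain Monte Carlo estimate of the state value (empirical success rate)."""
    return successes / rollouts

def binomial_pmf(k: int, n: int, p: float) -> float:
    """P(K = k) for K ~ Binomial(n, p)."""
    return math.comb(n, k) * p**k * (1 - p) ** (n - k)

def categorical_target(successes: int, rollouts: int, num_bins: int = 11):
    """Map an MC outcome to a distribution over discretized values p_i = i / (num_bins - 1).

    Each bin's weight is the normalized likelihood that a Binomial(rollouts, p_i)
    would produce the observed number of successes, so sampling noise is kept
    explicit instead of being collapsed into a single scalar.
    """
    support = [i / (num_bins - 1) for i in range(num_bins)]
    weights = [binomial_pmf(successes, rollouts, p) for p in support]
    z = sum(weights)
    probs = [w / z for w in weights]
    return support, probs

def expected_value(support, probs) -> float:
    """Recover a scalar value as the expectation of the categorical distribution."""
    return sum(v * p for v, p in zip(support, probs))

if __name__ == "__main__":
    # Example: 3 successful continuations out of 8 rollouts from an intermediate step.
    scalar = mc_scalar_value(3, 8)
    support, probs = categorical_target(3, 8)
    print(f"scalar MC value: {scalar:.3f}")
    print(f"expectation of categorical target: {expected_value(support, probs):.3f}")

Keeping the noisy Monte Carlo outcome as a distribution rather than a point estimate is what lets a training objective penalize the mismatch between the verifier's predicted distribution and the target; the paper's actual objective functions and distribution selection optimization are specified in the full text.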

View on arXiv
@article{sun2025_2502.17498,
  title={Improving Value-based Process Verifier via Structural Prior Injection},
  author={Zetian Sun and Dongfang Li and Baotian Hu and Jun Yu and Min Zhang},
  journal={arXiv preprint arXiv:2502.17498},
  year={2025}
}