
Token-Supervised Value Models for Enhancing Mathematical Problem-Solving Capabilities of Large Language Models

Jung Hyun Lee
June Yong Yang
Byeongho Heo
Dongyoon Han
Kyungsu Kim
Eunho Yang
Kang Min Yoo
Abstract

With the rapid advancement of test-time compute search strategies to improve the mathematical problem-solving capabilities of large language models (LLMs), the need for building robust verifiers has become increasingly important. However, these inference strategies rely on existing verifiers originally designed for Best-of-N search, which makes them sub-optimal for tree search techniques at test time. During tree search, existing verifiers can only offer indirect and implicit assessments of partial solutions, or may undervalue prospective intermediate steps, resulting in the premature pruning of promising intermediate steps. To overcome these limitations, we propose token-supervised value models (TVMs), a new class of verifiers that assign each token a probability reflecting the likelihood of reaching the correct final answer. This token-level supervision enables TVMs to directly and explicitly evaluate partial solutions, effectively distinguishing between promising and incorrect intermediate steps during tree search at test time. Experimental results demonstrate that combining tree-search-based inference strategies with TVMs significantly improves the accuracy of LLMs on mathematical problem-solving tasks, surpassing the performance of existing verifiers.
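The abstract's core idea, scoring each token with the estimated probability of reaching a correct final answer and using that score to keep or prune partial solutions during tree search, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the `token_values` interface and the hand-coded toy scores are assumptions standing in for a trained TVM head.

```python
# Hypothetical sketch: pruning partial solutions with per-token value
# estimates, in the spirit of a token-supervised value model (TVM).
from typing import Callable, List

def prune_partial_solutions(
    candidates: List[str],
    token_values: Callable[[str], List[float]],
    beam_width: int,
) -> List[str]:
    """Keep the beam_width partial solutions whose final token has the
    highest estimated probability of reaching a correct final answer."""
    scored = [(token_values(c)[-1], c) for c in candidates]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [c for _, c in scored[:beam_width]]

# Toy stand-in for a trained TVM: each partial solution maps to a list of
# per-token value estimates. A real TVM would produce these with a learned
# per-token prediction head over the LLM's hidden states.
TOY_VALUES = {
    "step A": [0.9, 0.8],
    "step B": [0.5, 0.2],
    "step C": [0.6, 0.7],
}

def toy_tvm(partial_solution: str) -> List[float]:
    return TOY_VALUES[partial_solution]

kept = prune_partial_solutions(list(TOY_VALUES), toy_tvm, beam_width=2)
print(kept)  # → ['step A', 'step C']
```

The key contrast with Best-of-N verifiers is visible here: the pruning decision reads a value at an intermediate token, so a promising partial solution ("step C") survives even though another candidate started with a higher initial score.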

@article{lee2025_2407.12863,
  title={Token-Supervised Value Models for Enhancing Mathematical Problem-Solving Capabilities of Large Language Models},
  author={Jung Hyun Lee and June Yong Yang and Byeongho Heo and Dongyoon Han and Kyungsu Kim and Eunho Yang and Kang Min Yoo},
  journal={arXiv preprint arXiv:2407.12863},
  year={2025}
}