LLM Critics Help Catch Bugs in Mathematics: Towards a Better Mathematical Verifier with Natural Language Feedback

20 June 2024
Bofei Gao
Zefan Cai
Runxin Xu
Peiyi Wang
Ce Zheng
Runji Lin
K. Lu
Dayiheng Liu
Chang Zhou
Wen Xiao
Junjie Hu
Tianyu Liu
Baobao Chang
Abstract

Mathematical verifiers achieve success in mathematical reasoning tasks by validating the correctness of solutions. However, existing verifiers are trained with binary classification labels, which are not informative enough for the model to accurately assess solutions. To mitigate this insufficiency of binary labels, we introduce step-wise natural language feedback as rationale labels (i.e., the correctness of the current step together with an explanation). In this paper, we propose Math-Minos, a natural-language-feedback-enhanced verifier built with automatically generated training data and a two-stage training paradigm for effective training and efficient inference. Our experiments reveal that a small set (30k) of natural language feedback examples can significantly boost the verifier's accuracy by 1.6% (86.6% → 88.2%) on GSM8K and by 0.8% (37.8% → 38.6%) on MATH. We have released our code and data for further exploration.
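
The two-stage paradigm in the abstract can be sketched roughly as follows: stage one trains the verifier backbone to generate step-wise natural-language critiques, and stage two trains the same backbone on plain binary correctness labels so that inference remains a single cheap prediction. The sketch below only illustrates how rationale-style targets and binary targets might be derived from one annotated example; the data fields, function names, and the GSM8K-style example are assumptions for illustration, not the paper's actual implementation.

from dataclasses import dataclass
from typing import List

@dataclass
class Step:
    text: str          # one reasoning step from a candidate solution
    is_correct: bool   # gold correctness label for this step
    critique: str      # natural-language explanation (rationale label)

@dataclass
class Example:
    question: str
    steps: List[Step]
    solution_correct: bool  # binary label for the whole solution

def stage1_target(ex: Example) -> str:
    """Stage 1 (assumed form): serialize step-wise feedback into a generation
    target so the verifier backbone learns to produce natural-language critiques."""
    lines = []
    for i, step in enumerate(ex.steps, start=1):
        verdict = "correct" if step.is_correct else "incorrect"
        lines.append(f"Step {i}: {verdict}. {step.critique}")
    return "\n".join(lines)

def stage2_target(ex: Example) -> int:
    """Stage 2 (assumed form): collapse the same example to a binary
    verification label so inference needs only a single scalar prediction."""
    return int(ex.solution_correct)

if __name__ == "__main__":
    # Hypothetical GSM8K-style example; values are illustrative only.
    ex = Example(
        question="Natalia sold 48 clips in April and half as many in May. How many in total?",
        steps=[
            Step("48 / 2 = 24 clips in May.", True, "Halving 48 correctly gives 24."),
            Step("48 + 24 = 72 clips in total.", True, "The sum of April and May sales is 72."),
        ],
        solution_correct=True,
    )
    print(stage1_target(ex))
    print("binary label:", stage2_target(ex))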
