ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

A Survey on Feedback-based Multi-step Reasoning for Large Language Models on Mathematics

21 February 2025
Ting-Ruen Wei
Haowei Liu
Xuyang Wu
Yi Fang
Abstract

Recent progress in large language models (LLMs) has shown that chain-of-thought prompting strategies improve reasoning by encouraging problem solving through multiple steps. Subsequent research therefore aimed to integrate the multi-step reasoning process into the LLM itself through process rewards as feedback, achieving improvements over prompting strategies. Due to the cost of step-level annotation, some works turn to outcome rewards as feedback instead. Aside from these training-based approaches, training-free techniques leverage frozen LLMs or external tools to provide feedback at each step and thereby enhance the reasoning process. Given the abundance of work in mathematics, owing to its logical nature, we present a survey of strategies that utilize feedback at the step and outcome levels to enhance multi-step math reasoning for LLMs. As multi-step reasoning emerges as a crucial component in scaling LLMs, we hope to establish its foundation for easier understanding and to empower further research.
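The distinction the abstract draws between step-level (process) and outcome-level feedback can be illustrated with a toy sketch. All names below are hypothetical stand-ins, not from the survey: `check_step` plays the role of a process reward model or an external tool (e.g. a calculator) that scores one reasoning step, while `outcome_reward` scores only the final answer.

```python
def check_step(step: str) -> bool:
    # Stand-in for a process reward model or external verifier:
    # here, a toy arithmetic check of one "lhs = rhs" step.
    lhs, _, rhs = step.partition("=")
    try:
        return eval(lhs) == eval(rhs)  # toy example only; eval is unsafe on untrusted input
    except Exception:
        return False

def process_rewards(steps: list[str]) -> list[float]:
    # Step-level (process) feedback: one score per intermediate step,
    # so a faulty step can be located and corrected.
    return [1.0 if check_step(s) else 0.0 for s in steps]

def outcome_reward(steps: list[str], expected: str) -> float:
    # Outcome-level feedback: a single score for the final answer,
    # cheaper to annotate but blind to where the chain went wrong.
    final = steps[-1].split("=")[-1].strip()
    return 1.0 if final == expected else 0.0

solution = ["2 + 3 = 5", "5 * 4 = 21", "21 - 1 = 20"]
print(process_rewards(solution))          # → [1.0, 0.0, 1.0]: flags the faulty middle step
print(outcome_reward(solution, "20"))     # → 1.0: the final answer alone looks fine
```

The example also shows why step-level annotation is more expensive: each intermediate step needs its own label, whereas the outcome reward needs only the final answer.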

View on arXiv
@article{wei2025_2502.14333,
  title={A Survey on Feedback-based Multi-step Reasoning for Large Language Models on Mathematics},
  author={Ting-Ruen Wei and Haowei Liu and Xuyang Wu and Yi Fang},
  journal={arXiv preprint arXiv:2502.14333},
  year={2025}
}