Dancing with Critiques: Enhancing LLM Reasoning with Stepwise Natural Language Self-Critique

21 March 2025
Yansi Li
Jiahao Xu
Tian Liang
Xingyu Chen
Zhiwei He
Qiuzhi Liu
Rui Wang
Zhuosheng Zhang
Zhaopeng Tu
Haitao Mi
Dong Yu
Abstract

Enhancing the reasoning capabilities of large language models (LLMs), particularly for complex tasks requiring multi-step logical deductions, remains a significant challenge. Traditional inference-time scaling methods use scalar reward signals from process reward models to evaluate candidate reasoning steps, but these scalar rewards lack the nuanced qualitative information essential for understanding and justifying each step. In this paper, we propose a novel inference-time scaling approach -- stepwise natural language self-critique (PANEL), which employs self-generated natural language critiques as feedback to guide the step-level search process. By generating rich, human-readable critiques for each candidate reasoning step, PANEL retains essential qualitative information, facilitating better-informed decision-making during inference. This approach bypasses the need for task-specific verifiers and the associated training overhead, making it broadly applicable across diverse tasks. Experimental results on challenging reasoning benchmarks, including AIME and GPQA, demonstrate that PANEL significantly enhances reasoning performance, outperforming traditional scalar reward-based methods. Our code is available at this https URL to support and encourage future research in this promising field.

View on arXiv
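
The abstract describes PANEL as a propose-critique-select loop at the level of individual reasoning steps. The outline below is only a minimal Python sketch of that idea, not the authors' released implementation; the function llm() is a hypothetical placeholder for any chat-completion backend, and the prompts, candidate count, and stopping rule are assumptions made for illustration.

def llm(prompt: str) -> str:
    """Placeholder for a language-model call; returns a text completion."""
    raise NotImplementedError("plug in your own LLM client here")

def propose_steps(problem: str, partial_solution: list[str], k: int = 3) -> list[str]:
    """Sample k candidate next reasoning steps given the partial solution so far."""
    context = "\n".join(partial_solution)
    prompt = (
        f"Problem:\n{problem}\n\nReasoning so far:\n{context}\n\n"
        "Propose the next reasoning step."
    )
    return [llm(prompt) for _ in range(k)]

def critique_step(problem: str, partial_solution: list[str], step: str) -> str:
    """Generate a natural-language critique of one candidate step."""
    context = "\n".join(partial_solution)
    prompt = (
        f"Problem:\n{problem}\n\nReasoning so far:\n{context}\n\n"
        f"Candidate next step:\n{step}\n\n"
        "Critique this step: is it correct, useful, and well justified?"
    )
    return llm(prompt)

def select_step(problem: str, partial_solution: list[str],
                candidates: list[str], critiques: list[str]) -> str:
    """Ask the model to pick the best candidate, using the critiques as feedback."""
    listing = "\n\n".join(
        f"Candidate {i + 1}:\n{c}\nCritique:\n{q}"
        for i, (c, q) in enumerate(zip(candidates, critiques))
    )
    prompt = (
        f"Problem:\n{problem}\n\nReasoning so far:\n" + "\n".join(partial_solution) +
        f"\n\n{listing}\n\nReply with the number of the best candidate."
    )
    choice = llm(prompt)
    idx = int("".join(ch for ch in choice if ch.isdigit()) or "1") - 1
    return candidates[max(0, min(idx, len(candidates) - 1))]

def panel_style_search(problem: str, max_steps: int = 8) -> list[str]:
    """Greedy step-level search: propose candidates, critique them, select one, repeat."""
    solution: list[str] = []
    for _ in range(max_steps):
        candidates = propose_steps(problem, solution)
        critiques = [critique_step(problem, solution, c) for c in candidates]
        best = select_step(problem, solution, candidates, critiques)
        solution.append(best)
        if "final answer" in best.lower():
            break
    return solution

Because the critiques are kept in natural language rather than reduced to a scalar score, the selection step can weigh qualitative reasons (correctness, usefulness, justification) when choosing among candidates, which is the distinction the paper draws against process-reward-model baselines.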
@article{li2025_2503.17363,
  title={Dancing with Critiques: Enhancing LLM Reasoning with Stepwise Natural Language Self-Critique},
  author={Yansi Li and Jiahao Xu and Tian Liang and Xingyu Chen and Zhiwei He and Qiuzhi Liu and Rui Wang and Zhuosheng Zhang and Zhaopeng Tu and Haitao Mi and Dong Yu},
  journal={arXiv preprint arXiv:2503.17363},
  year={2025}
}