Self-Critique Guided Iterative Reasoning for Multi-hop Question Answering

Although large language models (LLMs) have demonstrated remarkable reasoning capabilities, they still struggle with knowledge-intensive multi-hop reasoning. Recent work explores iterative retrieval to address complex problems. However, the lack of intermediate guidance often results in inaccurate retrieval and flawed intermediate reasoning, and ultimately incorrect answers. To address these issues, we propose Self-Critique Guided Iterative Reasoning (SiGIR), which uses self-critique feedback to guide the iterative reasoning process. Specifically, through end-to-end training, we enable the model to iteratively address complex problems via question decomposition, and to self-evaluate its intermediate reasoning steps. During iterative reasoning, the model engages in branching exploration and employs self-evaluation to select promising reasoning trajectories. Extensive experiments on three multi-hop reasoning datasets demonstrate the effectiveness of the proposed method, surpassing the previous SOTA by . Furthermore, our thorough analysis offers insights for future research. Our code, data, and models are available at GitHub: this https URL.
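The abstract describes an iterative loop: decompose the question into sub-questions, branch over candidate intermediate steps, score each step with self-critique, and follow the most promising trajectory. A minimal sketch of that control flow is below; the function names, the stubbed model calls, and the placeholder scoring heuristic are all illustrative assumptions, not the authors' actual implementation.

```python
# Illustrative sketch of a SiGIR-style loop: decompose, branch,
# self-critique, and keep the highest-scoring reasoning trajectory.
# The "model" here is stubbed out; in the paper, an end-to-end trained
# LLM would generate sub-questions, answers, and critique scores.

from dataclasses import dataclass, field


@dataclass
class Trajectory:
    steps: list = field(default_factory=list)  # (sub-question, sub-answer) pairs
    score: float = 0.0                         # accumulated self-critique score


def propose_steps(question, traj, n_branches):
    """Stub for the model proposing candidate sub-question/answer branches."""
    return [
        (f"sub-question {len(traj.steps)} (branch {b}) for: {question}",
         f"candidate answer {b}")
        for b in range(n_branches)
    ]


def self_critique(sub_q, sub_a):
    """Stub self-evaluation of an intermediate step, returning a score in (0, 1].
    A real system would have the model judge the step's correctness."""
    return 1.0 / (1 + len(sub_a) % 3)  # placeholder heuristic, not meaningful


def sigir_reason(question, max_hops=3, n_branches=2):
    """Iteratively extend the trajectory, keeping the best-scored branch per hop."""
    traj = Trajectory()
    for _ in range(max_hops):
        scored = [(self_critique(q, a), q, a)
                  for q, a in propose_steps(question, traj, n_branches)]
        score, sub_q, sub_a = max(scored)  # greedy selection over branches
        traj.steps.append((sub_q, sub_a))
        traj.score += score
    return traj
```

In this sketch, branch selection is greedy per hop; a beam over whole trajectories would be a natural variant of the same idea.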
@article{chu2025_2505.19112,
  title={Self-Critique Guided Iterative Reasoning for Multi-hop Question Answering},
  author={Zheng Chu and Huiming Fan and Jingchang Chen and Qianyu Wang and Mingda Yang and Jiafeng Liang and Zhongjie Wang and Hao Li and Guo Tang and Ming Liu and Bing Qin},
  journal={arXiv preprint arXiv:2505.19112},
  year={2025}
}