Learning to Reason from Feedback at Test-Time
Annual Meeting of the Association for Computational Linguistics (ACL), 2025
Main: 8 pages · Bibliography: 4 pages · Appendix: 1 page · 6 figures · 6 tables
Abstract
Solving complex tasks in a single attempt is challenging for large language models (LLMs). Iterative interaction with the environment and feedback is often required to achieve success, making effective feedback utilization a critical topic. Existing approaches either struggle with length generalization or rely on naive retries without leveraging prior information. In this paper, we introduce FTTT, a novel paradigm that formulates feedback utilization as an optimization problem at test time. Additionally, we propose a learnable test-time optimizer, OpTune, to effectively exploit feedback. Experiments on two LLMs across four reasoning datasets demonstrate that FTTT and OpTune achieve superior scalability and performance.
