Reasoning Strategies in Large Language Models: Can They Follow, Prefer, and Optimize?
Yanjian Zhang
Guillaume Wisniewski
Nadi Tomeh
Thierry Charnois
Main: 8 pages · Bibliography: 2 pages · Appendix: 8 pages · 9 figures · 12 tables
Abstract
Human reasoning involves different strategies, each suited to specific problems. Prior work shows that large language models (LLMs) tend to favor a single reasoning strategy, potentially limiting their effectiveness on diverse reasoning challenges. In this work, we investigate whether prompting can control LLMs' reasoning strategies and assess the impact of such control on logical problem-solving. While our experiments show that no single strategy consistently improves accuracy, performance could be enhanced if models adaptively chose the optimal strategy. We propose methods to guide LLMs in strategy selection, highlighting new ways to refine their reasoning abilities.
