Learn to Think: Bootstrapping LLM Reasoning Capability Through Graph Learning

Large Language Models (LLMs) have achieved remarkable success across various domains. However, they still face significant challenges, including high computational costs for training and limitations in solving complex reasoning problems. Although existing methods have extended the reasoning capabilities of LLMs through structured paradigms, these approaches often rely on task-specific prompts and predefined reasoning processes, which constrain their flexibility and generalizability. To address these limitations, we propose a novel framework that leverages graph learning to enable more flexible and adaptive reasoning for LLMs. Specifically, this approach models the reasoning process of a problem as a graph and employs LLM-based graph learning to guide the adaptive generation of each reasoning step. To further enhance the adaptability of the model, we introduce a Graph Neural Network (GNN) module that performs representation learning on the generated reasoning process, enabling real-time adjustments to both the model and the prompt. Experimental results demonstrate that this method significantly improves reasoning performance across multiple tasks without requiring additional training or task-specific prompt design. Code can be found at this https URL.
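As a rough illustration of the mechanism the abstract describes, the sketch below treats reasoning steps as nodes of a graph and runs one round of GNN message passing to refine their representations, which could then steer generation of the next step. This is a minimal sketch under stated assumptions, not the authors' implementation: the names (StepGNN, hidden dimension, mean-aggregation update) and the toy graph are all illustrative.

    # Minimal sketch: reasoning steps as graph nodes, refined by one round of
    # mean-aggregation message passing. Illustrative only; not the paper's code.
    import torch
    import torch.nn as nn

    class StepGNN(nn.Module):
        """One message-passing round over reasoning-step embeddings."""
        def __init__(self, dim: int):
            super().__init__()
            self.update = nn.Linear(2 * dim, dim)

        def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
            # x:   (num_steps, dim) embeddings of each reasoning step
            # adj: (num_steps, num_steps), adj[i, j] = 1 if step i depends on j
            deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
            neighbor_mean = adj @ x / deg  # average over predecessor steps
            return torch.relu(self.update(torch.cat([x, neighbor_mean], dim=-1)))

    # Toy usage: 4 steps; steps 1 and 2 depend on step 0, step 3 on steps 1 and 2.
    dim = 16
    x = torch.randn(4, dim)  # stand-in for LLM embeddings of each step
    adj = torch.tensor([[0, 0, 0, 0],
                        [1, 0, 0, 0],
                        [1, 0, 0, 0],
                        [0, 1, 1, 0]], dtype=torch.float)
    h = StepGNN(dim)(x, adj)  # refined step representations
    print(h.shape)            # torch.Size([4, 16])

In the framework's terms, such refined representations would be the signal used to adjust the model and the prompt as the reasoning graph grows; how that feedback is applied is specific to the paper and not reproduced here.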
@article{gao2025_2505.06321,
  title   = {Learn to Think: Bootstrapping LLM Reasoning Capability Through Graph Learning},
  author  = {Hang Gao and Chenhao Zhang and Tie Wang and Junsuo Zhao and Fengge Wu and Changwen Zheng and Huaping Liu},
  journal = {arXiv preprint arXiv:2505.06321},
  year    = {2025}
}