A*-Thought: Efficient Reasoning via Bidirectional Compression for Low-Resource Settings

Comments: 9 pages main text, 9 pages appendix, 3 pages bibliography, 12 figures, 7 tables
Abstract

Large Reasoning Models (LRMs) achieve superior performance by extending the length of their thinking trajectories. However, lengthy trajectories reduce efficiency. Most existing methods assume that LRMs overthink and attempt to reason efficiently by compressing the Chain-of-Thought, but this often degrades performance. To address this problem, we introduce A*-Thought, an efficient tree-search-based unified framework designed to identify and isolate the most essential thoughts from the extensive reasoning chains produced by these models. It formulates the reasoning process of LRMs as a search tree, where each node represents a reasoning span in the vast reasoning space. By combining the A* search algorithm with a cost function tailored to reasoning paths, it can efficiently compress the chain of thought and find a reasoning path with high information density and low cost. In addition, we propose a bidirectional importance estimation mechanism that further refines this search and improves its efficiency beyond uniform sampling. Extensive experiments on several advanced math tasks show that A*-Thought effectively balances performance and efficiency over a huge search space. Specifically, A*-Thought improves the performance of QwQ-32B by 2.39× in the low-budget setting and reduces output length by nearly 50% in the high-budget setting. The proposed method is also compatible with several other LRMs, demonstrating its generalization capability. The code can be accessed at: this https URL.
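
The abstract frames compression as a search problem; as a concrete illustration, the sketch below runs an A*-style best-first search over which reasoning spans to keep under a token budget. It is a minimal sketch, not the authors' method: the token_len and importance functions and the budgeted-subsequence formulation are illustrative assumptions, and the heuristic is the trivial h = 0 rather than the paper's cost function or bidirectional importance estimate.

import heapq

def token_len(span: str) -> int:
    # Crude stand-in for a tokenizer: whitespace word count.
    return len(span.split())

def importance(span: str) -> float:
    # Hypothetical importance score. The paper estimates importance
    # bidirectionally; here longer spans simply count as more informative.
    return float(token_len(span))

def a_star_compress(spans: list[str], budget: int) -> list[str]:
    # Select a subsequence of spans that fits `budget` tokens while
    # minimizing the total importance dropped. Path cost g = importance
    # dropped so far; heuristic h = 0 (trivially admissible), so this
    # degenerates to uniform-cost search. A tighter admissible h, such as
    # the paper's reasoning-path cost function, would prune the tree.
    # Node: (f, dropped_importance, next_index, used_tokens, kept_indices)
    frontier = [(0.0, 0.0, 0, 0, ())]
    while frontier:
        f, dropped, i, used, kept = heapq.heappop(frontier)
        if i == len(spans):  # goal: every span has been kept or dropped
            return [spans[j] for j in kept]
        span = spans[i]
        # Branch 1: drop span i, paying its importance as path cost.
        d = dropped + importance(span)
        heapq.heappush(frontier, (d, d, i + 1, used, kept))
        # Branch 2: keep span i if it still fits the budget.
        if used + token_len(span) <= budget:
            heapq.heappush(frontier,
                           (dropped, dropped, i + 1,
                            used + token_len(span), kept + (i,)))
    return []

chain = [
    "First, restate the problem in simpler terms.",
    "Try an incorrect substitution and backtrack.",
    "Set x = 2 and verify it satisfies the equation.",
    "Therefore the answer is 4.",
]
print(a_star_compress(chain, budget=12))

On this toy chain, the search keeps the opening restatement and the final conclusion and drops the backtracking detour, which is the flavor of compression the abstract describes.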

@article{xu2025_2505.24550,
  title={A*-Thought: Efficient Reasoning via Bidirectional Compression for Low-Resource Settings},
  author={Xiaoang Xu and Shuo Wang and Xu Han and Zhenghao Liu and Huijia Wu and Peipei Li and Zhiyuan Liu and Maosong Sun and Zhaofeng He},
  journal={arXiv preprint arXiv:2505.24550},
  year={2025}
}