Stepwise Perplexity-Guided Refinement for Efficient Chain-of-Thought Reasoning in Large Language Models

Chain-of-Thought (CoT) reasoning, which decomposes complex tasks into intermediate reasoning steps, has significantly improved the performance of large language models (LLMs) on challenging tasks. However, the detailed reasoning process in CoT often incurs long generation times and high computational costs, partly due to the inclusion of unnecessary steps. To address this, we propose a method that identifies critical reasoning steps using perplexity as a measure of their importance: a step is deemed critical if its removal causes a significant increase in perplexity. Our method then enables models to focus solely on generating these critical steps, through either of two approaches: refining the demonstration examples in few-shot CoT, or fine-tuning the model on selected examples that contain only critical steps. Comprehensive experiments validate the effectiveness of our method, which achieves a better balance between the accuracy and efficiency of CoT reasoning.
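
As a concrete illustration of the perplexity criterion described in the abstract, the sketch below scores each reasoning step by ablating it and measuring the resulting change in the perplexity of the final answer. This is a minimal sketch, not the authors' implementation: the model choice ("gpt2"), the helper names (answer_perplexity, critical_steps), and the relative threshold tau are illustrative assumptions.

```python
# A minimal sketch of perplexity-guided step scoring, assuming a HuggingFace
# causal LM. "gpt2", `tau`, and the helper names are illustrative assumptions,
# not the paper's prescribed setup.
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def answer_perplexity(prefix: str, answer: str) -> float:
    """Perplexity of the answer tokens, conditioned on the given prefix."""
    prefix_ids = tokenizer(prefix, return_tensors="pt").input_ids
    answer_ids = tokenizer(answer, return_tensors="pt").input_ids
    input_ids = torch.cat([prefix_ids, answer_ids], dim=1)
    labels = input_ids.clone()
    labels[:, : prefix_ids.shape[1]] = -100  # exclude prefix tokens from the loss
    with torch.no_grad():
        # With labels supplied, the model returns the mean NLL over answer tokens.
        loss = model(input_ids, labels=labels).loss
    return math.exp(loss.item())

def critical_steps(question: str, steps: list[str],
                   answer: str, tau: float = 1.2) -> list[int]:
    """Mark step i as critical if ablating it inflates answer perplexity by >= tau."""
    base = answer_perplexity(question + " " + " ".join(steps) + " ", answer)
    critical = []
    for i in range(len(steps)):
        ablated = steps[:i] + steps[i + 1:]
        ppl = answer_perplexity(question + " " + " ".join(ablated) + " ", answer)
        if ppl / base >= tau:  # "significant increase" => step i is critical
            critical.append(i)
    return critical
```

Under this criterion, the retained steps (those returned by critical_steps) would form the refined few-shot demonstrations or the fine-tuning targets; the multiplicative threshold tau is one possible way to operationalize a "significant increase" in perplexity.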
@article{cui2025_2502.13260,
  title={Stepwise Perplexity-Guided Refinement for Efficient Chain-of-Thought Reasoning in Large Language Models},
  author={Yingqian Cui and Pengfei He and Jingying Zeng and Hui Liu and Xianfeng Tang and Zhenwei Dai and Yan Han and Chen Luo and Jing Huang and Zhen Li and Suhang Wang and Yue Xing and Jiliang Tang and Qi He},
  journal={arXiv preprint arXiv:2502.13260},
  year={2025}
}