
ConSol: Sequential Probability Ratio Testing to Find Consistent LLM Reasoning Paths Efficiently

Abstract

Recent advancements in large language models (LLMs) that integrate explicit reasoning, such as OpenAI's o3-mini, DeepSeek-R1, and QWQ-32B, enable smaller models to solve complex tasks by generating intermediate reasoning steps before providing answers. However, this approach significantly increases computational costs, both monetarily and environmentally. The widely used self-consistency method further exacerbates these costs by aggregating multiple reasoning paths to improve accuracy, often requiring 40 to 64 samples per task. Although aggregation effectively reduces variance and bias, additional sampling yields diminishing returns when early samples are already consistent. To address these inefficiencies, we propose leveraging Sequential Probability Ratio Testing (SPRT) to dynamically terminate sampling once sufficient consistency is achieved. We calibrate the SPRT parameters specifically for LLM applications, accounting for the sensitivity required to detect the mode of the answer distribution. Our experiments demonstrate that incorporating SPRT significantly improves token efficiency, achieving accuracy comparable to self-consistency methods at a substantially reduced computational cost. To promote transparency and facilitate reproducibility, we have made the source code and datasets used in our experiments publicly available at our GitHub repository (this https URL) and as a PyPI package: pip install consol. We hope this resource will support further research and encourage the development of new methods building upon our work.
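As a rough sketch of the idea (not the authors' implementation or the consol package's API), the snippet below applies Wald's SPRT to decide, after each new LLM sample, whether the current modal answer is dominant enough to stop sampling. The callable sample_fn, the hypothesis rates p0/p1, and the error rates alpha/beta are hypothetical placeholders, not the paper's calibrated values; treating "agreement with the running mode" as the Bernoulli outcome is also a simplification, which is one reason such parameters need calibration in practice.

import math
from collections import Counter

def sprt_self_consistency(sample_fn, p0=0.5, p1=0.8, alpha=0.05, beta=0.05, max_samples=64):
    """Draw answers until Wald's SPRT is confident the modal answer is dominant.

    sample_fn   -- hypothetical callable that queries the LLM once and returns
                   a hashable final answer (not part of the consol package API).
    p0, p1      -- agreement rate with the mode under H0 (no dominant answer)
                   and H1 (dominant answer); illustrative values only.
    alpha, beta -- tolerated type-I and type-II error rates.
    """
    accept_h1 = math.log((1 - beta) / alpha)   # upper Wald boundary: declare consistency
    accept_h0 = math.log(beta / (1 - alpha))   # lower Wald boundary: give up on early stopping

    counts = Counter()
    for n in range(1, max_samples + 1):
        counts[sample_fn()] += 1
        mode, k = counts.most_common(1)[0]     # current modal answer and its count
        # Wald log-likelihood ratio for k agreements with the mode out of n samples.
        llr = k * math.log(p1 / p0) + (n - k) * math.log((1 - p1) / (1 - p0))
        if llr >= accept_h1:
            return mode, n                     # consistent enough: stop sampling early
        if llr <= accept_h0:
            break                              # evidence of inconsistency: stop testing
    # Fall back to a plain majority vote over whatever has been sampled.
    return counts.most_common(1)[0][0], n

A call such as sprt_self_consistency(lambda: ask_model(prompt)), where ask_model is whatever wrapper queries the LLM once, returns the accepted answer together with the number of samples actually drawn, so token savings come directly from stopping before the full sampling budget is spent.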

@article{lee2025_2503.17587,
  title={ConSol: Sequential Probability Ratio Testing to Find Consistent LLM Reasoning Paths Efficiently},
  author={Jaeyeon Lee and Guantong Qi and Matthew Brady Neeley and Zhandong Liu and Hyun-Hwan Jeong},
  journal={arXiv preprint arXiv:2503.17587},
  year={2025}
}