Between Underthinking and Overthinking: An Empirical Study of Reasoning Length and Correctness in LLMs

Abstract

Large language models (LLMs) are increasingly optimized for long reasoning, under the assumption that more reasoning leads to better performance. However, emerging evidence suggests that longer responses can sometimes degrade accuracy rather than improve it. In this paper, we conduct a systematic empirical study of the relationship between reasoning length and answer correctness. We find that LLMs tend to overthink simple problems, generating unnecessarily long outputs, and underthink harder ones, failing to extend their reasoning when it is most needed. This indicates that models may misjudge problem difficulty and fail to calibrate their response length appropriately. Furthermore, we investigate the effect of length reduction using a preference optimization algorithm that simply prefers shorter responses, regardless of answer correctness. Experiments show that generation length can be significantly reduced while maintaining acceptable accuracy. Our findings highlight generation length as a meaningful signal for reasoning behavior and motivate further exploration into LLMs' self-awareness in reasoning length adaptation.
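The length-reduction experiment described above applies preference optimization while simply preferring the shorter of the sampled responses. As a rough illustration of what such a length-only preference signal might look like, here is a minimal sketch that constructs DPO-style (prompt, chosen, rejected) preference pairs by response length alone; the pair format, field names, and helper are assumptions for illustration, not the authors' code.

```python
# Hypothetical sketch: build preference pairs that prefer the shortest
# sampled response and dispreference the longest, ignoring correctness.
# The (prompt / chosen / rejected) format follows common DPO training
# conventions; it is an assumption, not the paper's implementation.

def build_length_preference_pairs(samples):
    """samples: list of dicts with a 'prompt' and a list of 'responses'."""
    pairs = []
    for ex in samples:
        if len(ex["responses"]) < 2:
            continue  # need at least two responses to form a pair
        responses = sorted(ex["responses"], key=len)
        pairs.append({
            "prompt": ex["prompt"],
            "chosen": responses[0],     # shortest response is preferred
            "rejected": responses[-1],  # longest response is dispreferred
        })
    return pairs

if __name__ == "__main__":
    demo = [{
        "prompt": "What is 2 + 2?",
        "responses": [
            "Let me reason step by step about addition... so the answer is 4.",
            "4.",
        ],
    }]
    print(build_length_preference_pairs(demo))
```

Note that because correctness is deliberately ignored when forming pairs, a short wrong answer could be preferred over a long correct one; the abstract's point is that, empirically, this crude signal still preserved acceptable accuracy while cutting length.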

@article{su2025_2505.00127,
  title={Between Underthinking and Overthinking: An Empirical Study of Reasoning Length and Correctness in LLMs},
  author={Jinyan Su and Jennifer Healey and Preslav Nakov and Claire Cardie},
  journal={arXiv preprint arXiv:2505.00127},
  year={2025}
}