Self-Training Elicits Concise Reasoning in Large Language Models

27 February 2025
Tergel Munkhbat
Namgyu Ho
Seo Hyun Kim
Yongjin Yang
Yujin Kim
Se-Young Yun
Abstract

Chain-of-thought (CoT) reasoning has enabled large language models (LLMs) to utilize additional computation through intermediate tokens to solve complex tasks. However, we posit that typical reasoning traces contain many redundant tokens, incurring extraneous inference costs. Upon examination of the output distribution of current LLMs, we find evidence of a latent ability to reason more concisely, relative to their default behavior. To elicit this capability, we propose simple fine-tuning methods that leverage self-generated concise reasoning paths obtained by best-of-N sampling and few-shot conditioning, in task-specific settings. Our combined method achieves a 30% reduction in output tokens on average, across five model families on GSM8K and MATH, while maintaining average accuracy. By exploiting the fundamental stochasticity and in-context learning capabilities of LLMs, our self-training approach robustly elicits concise reasoning on a wide range of models, including those with extensive post-training. Code is available at this https URL
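The best-of-N selection step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the helper names (`shortest_correct`, `build_finetune_set`) and the data layout are assumptions, and sampling N candidates from an actual model is left out.

```python
# Hypothetical sketch: from N self-generated (reasoning, answer) samples per
# problem, keep only the correct ones and retain the shortest reasoning trace
# as a fine-tuning target for concise-reasoning self-training.

def shortest_correct(candidates, gold_answer):
    """Return the shortest correct reasoning trace, or None if all N fail."""
    correct = [reasoning for reasoning, answer in candidates
               if answer == gold_answer]
    return min(correct, key=len) if correct else None

def build_finetune_set(problems):
    """problems: iterable of (question, gold_answer, candidates) triples,
    where candidates is the list of N sampled (reasoning, answer) pairs.
    Problems with no correct sample are simply dropped."""
    dataset = []
    for question, gold, candidates in problems:
        trace = shortest_correct(candidates, gold)
        if trace is not None:
            dataset.append({"prompt": question, "completion": trace})
    return dataset
```

The resulting `dataset` would then be used for standard supervised fine-tuning, so the model learns to prefer its own shortest correct reasoning paths.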

@article{munkhbat2025_2502.20122,
  title={Self-Training Elicits Concise Reasoning in Large Language Models},
  author={Tergel Munkhbat and Namgyu Ho and Seo Hyun Kim and Yongjin Yang and Yujin Kim and Se-Young Yun},
  journal={arXiv preprint arXiv:2502.20122},
  year={2025}
}