Think, Prune, Train, Improve: Scaling Reasoning without Scaling Models

Abstract

Large language models (LLMs) have demonstrated strong capabilities in programming and mathematical reasoning tasks, but their progress is constrained by the limited availability of high-quality training data. Synthetic data can be leveraged to improve fine-tuning outcomes, but several factors influence this process, including model size, synthetic data volume, pruning strategy, and number of fine-tuning rounds. We explore these axes and investigate which conditions enable model self-improvement. We introduce the Think, Prune, Train process, a scalable framework that iteratively fine-tunes models on their own reasoning traces, using ground-truth pruning to ensure high-quality training data. This approach yields improved performance on GSM8K: Gemma2-2B achieves a Pass@1 of 57.6% (from 41.9%), Gemma2-9B reaches 82%, matching LLaMA-3.1-70B, and LLaMA-3.1-70B attains 91%, surpassing even GPT-4o. These results demonstrate the effectiveness of self-generated reasoning and systematic data selection for improving LLM capabilities.
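The abstract describes an iterative loop of sampling reasoning traces, pruning them against ground-truth answers, and fine-tuning on the survivors. Below is a minimal sketch of that loop in Python, not the authors' implementation: the helpers `generate_traces`, `is_correct`, and `fine_tune` are hypothetical stand-ins for model sampling, answer checking, and supervised fine-tuning.

```python
# Minimal sketch of a Think, Prune, Train loop, under the assumptions stated above.
# All helpers below are hypothetical placeholders, stubbed so the file runs as-is.

def generate_traces(model, problems, samples_per_problem=4):
    """Think: sample candidate reasoning traces from the current model (stubbed)."""
    return [(p, f"<reasoning for {p['question']}> answer: {p['answer']}", p["answer"])
            for p in problems for _ in range(samples_per_problem)]

def is_correct(trace, ground_truth):
    """Prune: keep only traces whose final answer matches the ground truth (placeholder check)."""
    return trace.strip().endswith(str(ground_truth))

def fine_tune(model, dataset):
    """Train: fine-tune the model on the pruned, self-generated traces (stubbed)."""
    return model  # a real implementation would update model weights here

def think_prune_train(model, problems, rounds=3):
    """Run several rounds of Think -> Prune -> Train on the same problem set."""
    for _ in range(rounds):
        traces = generate_traces(model, problems)                           # Think
        kept = [(p, t) for p, t, ans in traces if is_correct(t, ans)]       # Prune
        model = fine_tune(model, kept)                                      # Train
    return model
```

The key design point reflected here is that pruning uses ground-truth answers rather than model self-evaluation, so each round's training set stays high quality even though the traces are self-generated.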

@article{costello2025_2504.18116,
  title={Think, Prune, Train, Improve: Scaling Reasoning without Scaling Models},
  author={Caia Costello and Simon Guo and Anna Goldie and Azalia Mirhoseini},
  journal={arXiv preprint arXiv:2504.18116},
  year={2025}
}