Phi-4-Mini-Reasoning: Exploring the Limits of Small Reasoning Language Models in Math

Chain-of-Thought (CoT) significantly enhances formal reasoning capabilities in Large Language Models (LLMs) by training them to explicitly generate intermediate reasoning steps. While LLMs readily benefit from such techniques, improving reasoning in Small Language Models (SLMs) remains challenging due to their limited model capacity. Recent work on DeepSeek-R1 demonstrates that distillation from LLM-generated synthetic data can substantially improve the reasoning ability of SLMs. However, the detailed modeling recipe is not disclosed. In this work, we present a systematic training recipe for SLMs that consists of four steps: (1) large-scale mid-training on diverse distilled long-CoT data, (2) supervised fine-tuning on high-quality long-CoT data, (3) Rollout DPO leveraging a carefully curated preference dataset, and (4) Reinforcement Learning (RL) with Verifiable Reward. We apply our method to Phi-4-Mini, a compact 3.8B-parameter model. The resulting Phi-4-Mini-Reasoning model outperforms much larger reasoning models on math reasoning tasks, e.g., exceeding DeepSeek-R1-Distill-Qwen-7B by 3.2 points and DeepSeek-R1-Distill-Llama-8B by 7.7 points on MATH-500. Our results validate that a carefully designed training recipe, with large-scale high-quality CoT data, is effective in unlocking strong reasoning capabilities even in resource-constrained small models.
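
To make step (4) concrete, the Python sketch below shows one way a verifiable reward for math problems could be computed: extract the final boxed answer from a generated solution and compare it against the reference. This is only an illustrative assumption about how such a verifier might look; the function names and the exact-match check are hypothetical and are not taken from the paper.

import re

def extract_boxed_answer(text: str) -> str | None:
    # Return the last \boxed{...} expression in a generated solution, if any.
    matches = re.findall(r"\\boxed\{([^{}]*)\}", text)
    return matches[-1].strip() if matches else None

def verifiable_reward(completion: str, reference_answer: str) -> float:
    # Binary reward: 1.0 if the model's final answer matches the reference, else 0.0.
    predicted = extract_boxed_answer(completion)
    if predicted is None:
        return 0.0
    # Exact string match after whitespace normalization; a real verifier would
    # likely also check symbolic equivalence (e.g., 1/2 vs. 0.5).
    return 1.0 if predicted.replace(" ", "") == reference_answer.replace(" ", "") else 0.0

# Example usage (hypothetical):
completion = "... therefore the sum is \\boxed{42}."
print(verifiable_reward(completion, "42"))  # 1.0

Such a binary, automatically checkable signal is what makes the RL stage "verifiable": no learned reward model is needed when the ground-truth answer can be matched directly.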
@article{xu2025_2504.21233,
  title={Phi-4-Mini-Reasoning: Exploring the Limits of Small Reasoning Language Models in Math},
  author={Haoran Xu and Baolin Peng and Hany Awadalla and Dongdong Chen and Yen-Chun Chen and Mei Gao and Young Jin Kim and Yunsheng Li and Liliang Ren and Yelong Shen and Shuohang Wang and Weijian Xu and Jianfeng Gao and Weizhu Chen},
  journal={arXiv preprint arXiv:2504.21233},
  year={2025}
}