This technical report presents Ring-Lite-Distill, a lightweight reasoning model derived from our open-source Mixture-of-Experts (MoE) Large Language Model (LLM), Ling-Lite. This study demonstrates that, through meticulous high-quality data curation and carefully designed training paradigms, the compact MoE model Ling-Lite can be further trained to achieve exceptional reasoning capabilities while maintaining its parameter-efficient architecture with only 2.75 billion activated parameters, thereby establishing an efficient lightweight reasoning architecture. In particular, in constructing this model, we have not merely focused on enhancing advanced reasoning capabilities, exemplified by high-difficulty mathematical problem solving, but rather aimed to develop a reasoning model with more comprehensive competency coverage. Our approach ensures coverage across reasoning tasks of varying difficulty levels while preserving generic capabilities, such as instruction following, tool use, and knowledge retention. We show that Ring-Lite-Distill's reasoning ability reaches a level comparable to DeepSeek-R1-Distill-Qwen-7B, while its general capabilities significantly surpass those of DeepSeek-R1-Distill-Qwen-7B. The models are accessible at this https URL.
@article{team2025_2504.07158,
  title={Holistic Capability Preservation: Towards Compact Yet Comprehensive Reasoning Models},
  author={Ling Team and Caizhi Tang and Chilin Fu and Chunwei Wu and Jia Guo and Jianwen Wang and Jingyu Hu and Liang Jiang and Meng Li and Peng Jiao and Pingping Liu and Shaomian Zheng and Shiwei Liang and Shuaicheng Li and Yalin Zhang and Yingting Wu and Yongkang Liu and Zhenyu Huang},
  journal={arXiv preprint arXiv:2504.07158},
  year={2025}
}