The Quest for Efficient Reasoning: A Data-Centric Benchmark to CoT Distillation

Data-centric distillation, including data augmentation, selection, and mixing, offers a promising path to creating smaller, more efficient student Large Language Models (LLMs) that retain strong reasoning abilities. However, a comprehensive benchmark for systematically assessing the effect of each distillation approach is still lacking. This paper introduces DC-CoT, the first data-centric benchmark that investigates data manipulation in chain-of-thought (CoT) distillation from the method, model, and data perspectives. Utilizing various teacher models (e.g., o4-mini, Gemini-Pro, Claude-3.5) and student architectures (e.g., 3B and 7B parameters), we rigorously evaluate the impact of these data manipulations on student model performance across multiple reasoning datasets, with a focus on in-distribution (IID) and out-of-distribution (OOD) generalization and cross-domain transfer. Our findings aim to provide actionable insights and establish best practices for optimizing CoT distillation through data-centric techniques, ultimately facilitating the development of more accessible and capable reasoning models. The dataset can be found at this https URL, while our code is shared in this https URL.
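For intuition, a data-centric CoT distillation pipeline can be sketched as filtering teacher-generated rationales (selection) and combining pools from different source tasks (mixing) before fine-tuning the student. The function names, data fields, and correctness-based filtering rule below are illustrative assumptions, not the actual DC-CoT benchmark pipeline.

```python
# Minimal sketch of data-centric manipulation for CoT distillation:
# keep only teacher rationales whose final answer matches the gold label
# (selection), then combine pools from different reasoning domains (mixing).
# Field names and the mixing rule are assumptions for illustration.

def select_correct(examples):
    """Keep (question, rationale, answer) triples whose teacher answer is correct."""
    return [ex for ex in examples if ex["teacher_answer"] == ex["gold_answer"]]

def mix(pools, ratios):
    """Combine several distillation pools, taking a fixed fraction of each."""
    mixed = []
    for pool, ratio in zip(pools, ratios):
        mixed.extend(pool[: int(len(pool) * ratio)])
    return mixed

math_pool = [
    {"question": "2+2?", "rationale": "2 plus 2 is 4.",
     "teacher_answer": "4", "gold_answer": "4"},
    {"question": "3*3?", "rationale": "3 times 3 is 6.",
     "teacher_answer": "6", "gold_answer": "9"},  # wrong -> filtered out
]
qa_pool = [
    {"question": "Capital of France?", "rationale": "Paris is the capital.",
     "teacher_answer": "Paris", "gold_answer": "Paris"},
]

train_set = mix([select_correct(math_pool), select_correct(qa_pool)], [1.0, 1.0])
print(len(train_set))  # 2 examples survive selection
```

The surviving `(question, rationale, answer)` triples would then serve as supervised fine-tuning targets for the student model; augmentation (e.g., having the teacher generate multiple rationales per question) would expand the pools before this selection step.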
@article{zhang2025_2505.18759,
  title   = {The Quest for Efficient Reasoning: A Data-Centric Benchmark to CoT Distillation},
  author  = {Ruichen Zhang and Rana Muhammad Shahroz Khan and Zhen Tan and Dawei Li and Song Wang and Tianlong Chen},
  journal = {arXiv preprint arXiv:2505.18759},
  year    = {2025}
}