ResearchTrend.AI


Long Is More Important Than Difficult for Training Reasoning Models

23 March 2025
Si Shen
Fei Huang
Zhixiao Zhao
Chang Liu
Tiansheng Zheng
Danhao Zhu
Abstract

Difficult problems, which often produce long reasoning traces, are widely recognized as key to improving the performance of reasoning models. However, such high-challenge problems are scarce, limiting the size of available datasets. In this paper, we propose a simple method to decouple training from problem difficulty. First, we empirically demonstrate that reasoning length, rather than problem difficulty, primarily determines the performance of trained models. Second, we identify a scaling law on reasoning length: model performance increases log-linearly as the length of the reasoning data grows. Finally, we introduce a straightforward technique to generate reasoning data of arbitrary length and show that the synthesized data is effective for training reasoning models. After fine-tuning the Qwen2.5-32B-Instruct language model on our Long1K dataset, we present our model, Long1K-32B, which achieves remarkable performance with only 1,000 training samples: 95.6% accuracy on MATH and 71.1% on GPQA, outperforming DeepSeek-R1-Distill-Qwen-32B. The model, code, and dataset are all open-sourced, available at this https URL.
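The log-linear scaling law claimed above can be sketched as an ordinary least-squares fit of accuracy against the logarithm of reasoning length. The numbers below are hypothetical placeholders, not the paper's measurements; the fit itself is just standard simple linear regression on a log-transformed axis.

```python
import math

# Hypothetical (reasoning length, accuracy) pairs chosen to lie on a
# log-linear trend; NOT data from the paper.
data = [(1_000, 0.60), (2_000, 0.66), (4_000, 0.72), (8_000, 0.78)]

# Least-squares fit of: accuracy = a + b * ln(length)
xs = [math.log(length) for length, _ in data]
ys = [acc for _, acc in data]
n = len(data)
x_mean = sum(xs) / n
y_mean = sum(ys) / n
b = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) / \
    sum((x - x_mean) ** 2 for x in xs)
a = y_mean - b * x_mean

def predict(length: float) -> float:
    """Predicted accuracy under the fitted log-linear law."""
    return a + b * math.log(length)
```

Under such a law, each doubling of reasoning length adds a constant increment (here `b * ln 2`) to predicted accuracy, which is what "log-linear" means in practice.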

@article{shen2025_2503.18069,
  title={Long Is More Important Than Difficult for Training Reasoning Models},
  author={Si Shen and Fei Huang and Zhixiao Zhao and Chang Liu and Tiansheng Zheng and Danhao Zhu},
  journal={arXiv preprint arXiv:2503.18069},
  year={2025}
}