FastCuRL: Curriculum Reinforcement Learning with Progressive Context Extension for Efficient Training R1-like Reasoning Models

Abstract

Improving training efficiency remains one of the most significant challenges in large-scale reinforcement learning. In this paper, we investigate how the model's context length and the complexity of the training dataset influence the training process of R1-like models. Our experiments reveal three key insights: (1) adopting longer context lengths does not necessarily result in better performance; (2) selecting an appropriate context length helps mitigate entropy collapse; and (3) appropriately controlling the model's context length and curating training data based on input prompt length can effectively improve RL training efficiency, achieving better performance with shorter thinking lengths. Inspired by these insights, we propose FastCuRL, a curriculum reinforcement learning framework with a progressive context extension strategy, which successfully accelerates the training of RL models. Experimental results demonstrate that FastCuRL-1.5B-Preview surpasses DeepScaleR-1.5B-Preview across all five benchmarks while using only 50% of the training steps. Furthermore, all training stages for FastCuRL-1.5B-Preview are completed using a single node with 8 GPUs.
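To make the curriculum idea concrete, the sketch below shows one plausible way to bucket training prompts by input length and pair each bucket with a progressively larger context budget, as the abstract describes. This is a minimal illustration, not the authors' implementation: the stage names, token limits, and step counts are assumptions chosen only for demonstration.

```python
# Minimal sketch of a prompt-length curriculum with progressive context extension.
# All stage boundaries and budgets are illustrative assumptions, not FastCuRL's
# actual configuration.

from dataclasses import dataclass
from typing import Dict, List


@dataclass
class CurriculumStage:
    name: str
    max_prompt_tokens: int   # upper bound on input prompt length for this split
    max_context_tokens: int  # response/context budget allowed during this stage
    train_steps: int         # RL steps to run before moving to the next stage


def split_by_prompt_length(prompts: List[str], stages: List[CurriculumStage],
                           tokenize=lambda s: s.split()) -> Dict[str, List[str]]:
    """Assign each prompt to the first stage whose length bound it fits."""
    splits: Dict[str, List[str]] = {s.name: [] for s in stages}
    for p in prompts:
        n = len(tokenize(p))
        for stage in stages:
            if n <= stage.max_prompt_tokens:
                splits[stage.name].append(p)
                break
    return splits


# Illustrative three-stage schedule: shorter prompts with a tight context first,
# then progressively longer prompts with extended context windows.
stages = [
    CurriculumStage("stage1_short",  max_prompt_tokens=128, max_context_tokens=8_192,  train_steps=400),
    CurriculumStage("stage2_medium", max_prompt_tokens=256, max_context_tokens=16_384, train_steps=400),
    CurriculumStage("stage3_long",   max_prompt_tokens=512, max_context_tokens=24_576, train_steps=400),
]

if __name__ == "__main__":
    prompts = ["Prove that ...", "Compute the integral of ..."]  # placeholder data
    splits = split_by_prompt_length(prompts, stages)
    for stage in stages:
        print(f"{stage.name}: {len(splits[stage.name])} prompts, "
              f"context budget {stage.max_context_tokens} tokens")
        # Here one would run the RL stage (e.g., a PPO/GRPO-style trainer)
        # on splits[stage.name] with responses capped at max_context_tokens.
```

The design point being illustrated is simply that each stage constrains both the data (by prompt length) and the context window, and later stages relax both limits together.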

@article{song2025_2503.17287,
  title={FastCuRL: Curriculum Reinforcement Learning with Progressive Context Extension for Efficient Training R1-like Reasoning Models},
  author={Mingyang Song and Mao Zheng and Zheng Li and Wenjie Yang and Xuan Luo and Yue Pan and Feng Zhang},
  journal={arXiv preprint arXiv:2503.17287},
  year={2025}
}