VAPO: Efficient and Reliable Reinforcement Learning for Advanced Reasoning Tasks

We present VAPO (Value-based Augmented Proximal Policy Optimization), a novel framework tailored for reasoning models within the value-based paradigm. Benchmarked on the AIME 2024 dataset, VAPO, built on the Qwen-32B pre-trained model, attains a state-of-the-art score. In a direct comparison under identical experimental settings, VAPO outperforms the previously reported results of DeepSeek-R1-Zero-Qwen-32B and DAPO by more than 10 points. The training process of VAPO stands out for its stability and efficiency: it reaches state-of-the-art performance within a mere 5,000 steps, and across multiple independent runs no training crashes occur, underscoring its reliability. This research delves into long chain-of-thought (long-CoT) reasoning under a value-based reinforcement learning framework. We pinpoint three key challenges that plague value-based methods: value-model bias, heterogeneous sequence lengths, and sparse reward signals. Through systematic design, VAPO offers an integrated solution that effectively alleviates these challenges, enabling enhanced performance in long-CoT reasoning tasks.
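To make the value-based setting concrete, here is a minimal sketch of the two standard components such frameworks build on: Generalized Advantage Estimation (GAE) computed against a learned value model, and the PPO clipped surrogate objective. This is an illustrative NumPy sketch of the generic machinery, not VAPO's exact algorithm; the function names, hyperparameter defaults, and trajectory layout are assumptions for the example. Note how a sparse reward (a single nonzero reward at the end of a long sequence) and the choice of lambda interact with sequence length, which is the regime the abstract highlights.

```python
import numpy as np

def gae_advantages(rewards, values, gamma=1.0, lam=0.95):
    """Generalized Advantage Estimation over one trajectory.

    rewards: length-T array; in sparse-reward long-CoT settings the
             only nonzero entry is typically the final one.
    values:  length-(T+1) array of value-model predictions, with a
             bootstrap value appended at the end.
    """
    T = len(rewards)
    adv = np.zeros(T)
    last = 0.0
    # Backward recursion: A_t = delta_t + gamma * lam * A_{t+1}
    for t in reversed(range(T)):
        delta = rewards[t] + gamma * values[t + 1] - values[t]
        last = delta + gamma * lam * last
        adv[t] = last
    return adv

def ppo_clip_loss(logp_new, logp_old, adv, eps=0.2):
    """PPO clipped surrogate objective (returned as a loss to minimize)."""
    ratio = np.exp(logp_new - logp_old)
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps)
    return -np.mean(np.minimum(ratio * adv, clipped * adv))
```

With a fixed lambda < 1, the credit assigned to early tokens from a terminal reward decays with distance, so trajectories of very different lengths receive systematically different advantage scales; this is one face of the heterogeneous-sequence-length challenge the abstract names.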
@article{yue2025_2504.05118,
  title={VAPO: Efficient and Reliable Reinforcement Learning for Advanced Reasoning Tasks},
  author={Yu Yue and Yufeng Yuan and Qiying Yu and Xiaochen Zuo and Ruofei Zhu and Wenyuan Xu and Jiaze Chen and Chengyi Wang and TianTian Fan and Zhengyin Du and Xiangpeng Wei and Xiangyu Yu and Gaohong Liu and Juncai Liu and Lingjun Liu and Haibin Lin and Zhiqi Lin and Bole Ma and Chi Zhang and Mofan Zhang and Wang Zhang and Hang Zhu and Ru Zhang and Xin Liu and Mingxuan Wang and Yonghui Wu and Lin Yan},
  journal={arXiv preprint arXiv:2504.05118},
  year={2025}
}