
CPPO: Accelerating the Training of Group Relative Policy Optimization-Based Reasoning Models

Computer Vision and Pattern Recognition (CVPR), 2024
Main: 10 pages
6 figures
15 tables
Bibliography: 2 pages
Appendix: 7 pages
Abstract

This paper introduces Completion Pruning Policy Optimization (CPPO) to accelerate the training of reasoning models based on Group Relative Policy Optimization (GRPO). GRPO, while effective, incurs high training costs because multiple completions must be sampled for each question. Our experiments and theoretical analysis reveal that the number of completions impacts model accuracy yet increases training time multiplicatively, and that not all completions contribute equally to policy training: their contribution depends on their relative advantage. To address these issues, we propose CPPO, which prunes completions with low absolute advantages, significantly reducing the number needed for gradient calculation and updates. Additionally, we introduce a dynamic completion allocation strategy that maximizes GPU utilization by incorporating additional questions, further enhancing training efficiency. Experiments show that CPPO achieves up to a 7.98× speedup on GSM8K and 3.48× on MATH while preserving or even enhancing accuracy compared to the original GRPO. We release our code at this https URL.
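The core idea in the abstract, keeping only completions whose group-relative advantage has large magnitude, can be sketched as follows. This is a minimal illustration, not the authors' released code: it assumes GRPO's standard per-group reward normalization, and the names `prune_completions` and `k` are illustrative.

```python
# Sketch of CPPO-style completion pruning (assumed interface, not the paper's code).
# In GRPO, each question receives a group of G sampled completions; completion i
# gets a group-relative advantage A_i = (r_i - mean(r)) / (std(r) + eps).
# CPPO's pruning keeps only the k completions with the largest |A_i| for the
# gradient computation, since low-|advantage| completions contribute little.

from statistics import mean, pstdev

def group_relative_advantages(rewards, eps=1e-8):
    """GRPO-style advantages: rewards normalized within the sampled group."""
    mu = mean(rewards)
    sigma = pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]

def prune_completions(rewards, k):
    """Return the indices of the k completions with the largest |advantage|."""
    adv = group_relative_advantages(rewards)
    ranked = sorted(range(len(rewards)), key=lambda i: abs(adv[i]), reverse=True)
    return sorted(ranked[:k])

# Example: 8 completions with binary/partial rewards; keep 4 of them.
# Completions 4 and 5 sit at the group mean (advantage 0) and are pruned.
rewards = [1.0, 0.0, 0.0, 1.0, 0.5, 0.5, 0.0, 1.0]
kept = prune_completions(rewards, k=4)
```

The dynamic completion allocation strategy mentioned in the abstract would then refill the GPU batch with completions from additional questions in place of the pruned ones; that scheduling step is omitted here.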
