
Contextual Rollout Bandits for Reinforcement Learning with Verifiable Rewards

Xiaodong Lu
Xiaohan Wang
Jiajun Chai
Guojun Yin
Wei Lin
Zhijun Chen
Yu Luo
Fuzhen Zhuang
Yikun Ban
Deqing Wang
Main: 8 pages, Bibliography: 3 pages, Appendix: 14 pages; 13 figures, 5 tables
Abstract

Reinforcement Learning with Verifiable Rewards (RLVR) is an effective paradigm for improving the reasoning capabilities of large language models. However, existing RLVR methods use rollouts in an indiscriminate, short-horizon manner: responses of heterogeneous quality generated for the same prompt are treated uniformly, and historical rollouts are discarded after a single use. The result is noisy supervision, poor sample efficiency, and suboptimal policy updates. We address these issues by formulating rollout scheduling in RLVR as a contextual bandit problem and proposing a unified neural scheduling framework that adaptively selects high-value rollouts throughout training. Each rollout is treated as an arm whose reward is the performance gain it induces between consecutive optimization steps. The resulting scheduler supports both noise-aware intra-group selection and adaptive global reuse of historical rollouts within a single principled framework. We provide theoretical justification by deriving sublinear regret bounds and by showing that enlarging the rollout buffer raises the achievable performance upper bound. Experiments on six mathematical reasoning benchmarks demonstrate consistent gains in performance and training efficiency across multiple RLVR optimization methods.
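The abstract describes the scheduler only at a high level. As a minimal sketch of the general mechanism it names (a contextual bandit whose arms are candidate rollouts and whose reward is the measured gain between consecutive optimization steps), the Python below uses a simple linear value head with epsilon-greedy exploration. The class name RolloutScheduler, the feature construction, and the toy gain signal are illustrative assumptions, not the authors' actual (neural) implementation.

```python
# Illustrative sketch only: a contextual bandit that scores rollouts with a
# learned value model and is rewarded by the step-to-step gain of the policy
# trained on its selections. All names here are hypothetical.
import numpy as np


class RolloutScheduler:
    """Epsilon-greedy contextual bandit over candidate rollouts.

    Each arm is one rollout, described by a context vector (e.g. verifier
    reward, length, entropy statistics). The bandit reward is the performance
    gain observed between consecutive optimization steps after training on
    the selected rollouts.
    """

    def __init__(self, dim: int, lr: float = 1e-2, eps: float = 0.1, seed: int = 0):
        self.w = np.zeros(dim)          # linear value head over rollout features
        self.lr = lr                    # SGD step size for the value head
        self.eps = eps                  # exploration rate
        self.rng = np.random.default_rng(seed)

    def select(self, contexts: np.ndarray, k: int) -> np.ndarray:
        """Pick k rollouts from a candidate pool of shape (n_arms, dim)."""
        n = contexts.shape[0]
        if self.rng.random() < self.eps:
            return self.rng.choice(n, size=k, replace=False)  # explore
        scores = contexts @ self.w                            # predicted value
        return np.argsort(scores)[-k:]                        # exploit top-k

    def update(self, contexts: np.ndarray, chosen: np.ndarray, gain: float) -> None:
        """Credit the selected arms with the induced performance gain."""
        for i in chosen:
            pred = contexts[i] @ self.w
            self.w += self.lr * (gain - pred) * contexts[i]   # squared-loss SGD


# Toy usage: at each optimization step, score the current rollout pool
# (fresh generations plus a replay buffer), train on the top-k, then feed
# the measured gain back to the scheduler.
if __name__ == "__main__":
    dim, pool_size, k = 8, 64, 16
    sched = RolloutScheduler(dim)
    for step in range(100):
        pool = np.random.default_rng(step).normal(size=(pool_size, dim))
        picked = sched.select(pool, k)
        # Stand-in for the true reward: the evaluation-metric delta between
        # consecutive optimization steps after training on the picked rollouts.
        gain = float(pool[picked].sum(axis=1).mean()) * 0.01
        sched.update(pool, picked, gain)
```

In this simplified form the scheduler already covers both behaviors the abstract mentions: intra-group selection (ranking the rollouts generated for one prompt) and global reuse (including buffered historical rollouts in the candidate pool).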
