Reinforced Efficient Reasoning via Semantically Diverse Exploration

Ziqi Zhao
Zhaochun Ren
Jiahong Zou
Liu Yang
Zhiwei Xu
Xuri Ge
Zhumin Chen
Xinyu Ma
Daiting Shi
Shuaiqiang Wang
Dawei Yin
Xin Xin
Main: 8 pages · Appendix: 3 pages · Bibliography: 2 pages · 9 figures · 5 tables
Abstract

Reinforcement learning with verifiable rewards (RLVR) has proven effective in enhancing the reasoning of large language models (LLMs). Monte Carlo Tree Search (MCTS)-based extensions improve upon vanilla RLVR (e.g., GRPO) by providing tree-based reasoning rollouts that enable fine-grained, segment-level credit assignment. However, existing methods still suffer from limited exploration diversity and inefficient reasoning. To address these challenges, we propose reinforced efficient reasoning via semantically diverse exploration, i.e., ROSE, for LLMs. To encourage more diverse reasoning exploration, our method incorporates a semantic-entropy-based branching strategy and an ε-exploration mechanism. The former operates on already sampled reasoning rollouts to capture semantic uncertainty and selects branching points with high semantic divergence from which to generate new successive reasoning paths, whereas the latter stochastically initiates reasoning rollouts from the root, preventing the search process from becoming overly local. To improve efficiency, we design a length-aware segment-level advantage estimator that rewards concise, correct reasoning while penalizing unnecessarily long reasoning chains. Extensive experiments on various mathematical reasoning benchmarks with Qwen and Llama models validate the effectiveness and efficiency of ROSE. Code is available at this https URL.
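The two mechanisms named in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the clustering step, the penalty coefficient `lam`, and the group-normalization details are all assumptions. It shows (a) semantic entropy computed over continuations already grouped into meaning clusters, which would score candidate branching points, and (b) a GRPO-style group-normalized advantage with a length penalty applied only to correct rollouts.

```python
import math


def semantic_entropy(cluster_ids):
    """Entropy over semantic clusters of sampled continuations.

    `cluster_ids` assigns each continuation to a meaning cluster (in the
    paper this clustering would come from a semantic-equivalence model;
    here it is taken as given). Higher entropy = more semantic divergence,
    so the corresponding prefix is a better branching point.
    """
    counts = {}
    for c in cluster_ids:
        counts[c] = counts.get(c, 0) + 1
    n = len(cluster_ids)
    return -sum((k / n) * math.log(k / n) for k in counts.values())


def length_aware_advantage(rewards, lengths, lam=0.1):
    """Hypothetical length-aware advantage for a group of rollouts.

    Advantages are normalized against the group mean/std (GRPO-style);
    correct rollouts (reward > 0) are additionally penalized in proportion
    to how far their token length exceeds the group mean, discouraging
    unnecessarily long reasoning chains.
    """
    n = len(rewards)
    mean_r = sum(rewards) / n
    std_r = (sum((r - mean_r) ** 2 for r in rewards) / n) ** 0.5 or 1.0
    mean_len = sum(lengths) / n
    advs = []
    for r, length in zip(rewards, lengths):
        adv = (r - mean_r) / std_r
        if r > 0:  # only penalize excess length on correct rollouts
            adv -= lam * max(0.0, (length - mean_len) / mean_len)
        advs.append(adv)
    return advs
```

For example, with two correct and two incorrect rollouts, the longer correct rollout receives a strictly smaller advantage than the shorter one, while both remain above the incorrect rollouts.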
