
Every Rollout Counts: Optimal Resource Allocation for Efficient Test-Time Scaling

Main: 9 pages
Figures: 7
Tables: 1
Bibliography: 4 pages
Appendix: 5 pages
Abstract

Test-Time Scaling (TTS) improves the performance of Large Language Models (LLMs) by using additional inference-time computation to explore multiple reasoning paths through search. Yet how to allocate a fixed rollout budget most effectively during search remains underexplored, often resulting in inefficient use of compute at test time. To bridge this gap, we formulate test-time search as a resource allocation problem and derive the optimal allocation strategy that maximizes the probability of obtaining a correct solution under a fixed rollout budget. Within this formulation, we reveal a core limitation of existing search methods: solution-level allocation tends to favor reasoning directions with more candidates, leading to theoretically suboptimal and inefficient use of compute. To address this, we propose Direction-Oriented Resource Allocation (DORA), a provably optimal method that mitigates this bias by decoupling direction quality from candidate count and allocating resources at the direction level. To demonstrate DORA's effectiveness, we conduct extensive experiments on challenging mathematical reasoning benchmarks including MATH500, AIME2024, and AIME2025. The empirical results show that DORA consistently outperforms strong baselines with comparable computational cost, achieving state-of-the-art accuracy. We hope our findings contribute to a broader understanding of optimal TTS for LLMs.
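The bias the abstract describes can be illustrated with a minimal sketch. Below, a fixed rollout budget is split across reasoning directions in two ways: solution-level allocation, which is proportional to each direction's candidate count, and direction-level allocation in the spirit of DORA, which weights each direction by its estimated quality alone. The direction names, candidate lists, and quality scores are hypothetical illustrations, not values from the paper.

```python
def solution_level_allocation(directions, budget):
    """Split the rollout budget proportionally to each direction's candidate *count*."""
    total = sum(len(d["candidates"]) for d in directions)
    return {d["name"]: budget * len(d["candidates"]) / total for d in directions}

def direction_level_allocation(directions, budget):
    """Split the rollout budget proportionally to each direction's estimated *quality*."""
    total = sum(d["quality"] for d in directions)
    return {d["name"]: budget * d["quality"] / total for d in directions}

# Illustrative toy setup: direction A has many candidates but low quality;
# direction B has one candidate but high quality.
directions = [
    {"name": "A", "candidates": ["a1", "a2", "a3", "a4"], "quality": 0.3},
    {"name": "B", "candidates": ["b1"], "quality": 0.7},
]

# Solution-level allocation favors A purely because it has more candidates;
# direction-level allocation instead routes most rollouts to the stronger direction B.
print(solution_level_allocation(directions, budget=100))
print(direction_level_allocation(directions, budget=100))
```

The toy example makes the suboptimality concrete: under solution-level allocation, direction A receives 80% of the budget despite being the weaker direction, whereas direction-level allocation gives 70% to B, matching its estimated quality.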

@article{wang2025_2506.15707,
  title={Every Rollout Counts: Optimal Resource Allocation for Efficient Test-Time Scaling},
  author={Xinglin Wang and Yiwei Li and Shaoxiong Feng and Peiwen Yuan and Yueqi Zhang and Jiayi Shi and Chuyi Tan and Boyuan Pan and Yao Hu and Kan Li},
  journal={arXiv preprint arXiv:2506.15707},
  year={2025}
}