
Best-of-Majority: Minimax-Optimal Strategy for Pass@$k$ Inference Scaling

Main: 23 pages
3 figures
1 table
Bibliography: 6 pages
Abstract

LLM inference often generates a batch of candidates for a prompt and selects one via strategies such as majority voting or Best-of-$N$ (BoN). For difficult tasks, this single-shot selection often underperforms. Consequently, evaluations commonly report Pass@$k$: the agent may submit up to $k$ responses, and only the best of them is used when computing regret. Motivated by this, we study inference scaling in the more general Pass@$k$ setting, and prove that neither majority voting nor BoN exhibits the desirable scaling with $k$ and the sampling budget $N$. Combining the advantages of majority voting and BoN, we propose a new inference strategy called Best-of-Majority (BoM), whose pivotal step restricts the candidates to the responses with high frequency among the $N$ samples before selecting the top-$k$ by reward. We prove that when the sampling budget is $N = \tilde\Omega(C^*)$, the regret of BoM is $O(\epsilon_{\mathrm{opt}} + \sqrt{\epsilon_{\mathrm{RM}}^2 C^*/k})$, where $C^*$ is the coverage coefficient, $\epsilon_{\mathrm{RM}}$ is the estimation error of the reward model, and $\epsilon_{\mathrm{opt}}$ is the estimation error of the reward at the optimal response. We further establish a matching lower bound, certifying that our algorithm is minimax optimal. Beyond optimality, BoM has a key advantage: unlike majority voting and BoN, its performance does not degrade as $N$ increases. Experiments on math problems show that BoM outperforms both majority voting and BoN.
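For intuition, here is a minimal Python sketch of the two-stage BoM selection rule described above: frequency filtering followed by a top-$k$ reward pick. The `sample` and `reward` callables and the frequency threshold `min_count` are illustrative assumptions, not the paper's exact specification.

```python
from collections import Counter

def best_of_majority(sample, reward, N, k, min_count=2):
    """Best-of-Majority (BoM) sketch: draw N candidate responses,
    keep only those appearing with high frequency, then return the
    top-k survivors ranked by the estimated reward.

    `sample`, `reward`, and `min_count` are placeholders; the
    paper's exact frequency threshold may differ.
    """
    responses = [sample() for _ in range(N)]   # N i.i.d. candidates
    counts = Counter(responses)                # empirical frequencies
    # Majority-style filter: drop low-frequency (likely spurious) responses.
    frequent = [r for r, c in counts.items() if c >= min_count]
    if not frequent:                           # fall back to all distinct candidates
        frequent = list(counts)
    # BoN-style step restricted to the frequent set: top-k by reward model.
    return sorted(frequent, key=reward, reverse=True)[:k]
```

Note the design point the abstract emphasizes: because the reward model only ranks responses that already survived the frequency filter, sampling more candidates (larger $N$) cannot promote a rare, reward-hacked response, which is why BoM's performance does not degrade with $N$.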
