RoBoN: Routed Online Best-of-n for Test-Time Scaling with Multiple LLMs
Best-of-n is a widely used test-time scaling approach for LLM inference. Yet despite evidence that LLMs exhibit complementary strengths across tasks, best-of-n traditionally relies on a single model to generate responses. We propose RoBoN (Routed Online Best-of-n), a sequential multi-LLM alternative to the prevailing single-model best-of-n. Given a suite of models, RoBoN sequentially routes generations one-by-one across models, based on scores computed using a reward model and an agreement signal on the predicted responses. This online routing requires no additional training, keeps compute parity, and works with any plug-in reward model. Across reasoning benchmarks (MATH500, OlympiadBench, MinervaMath, GSM8K, MMLU), RoBoN consistently outperforms standard best-of-n applied to each individual model for larger n, with gains of up to 3.4% in absolute accuracy, and also improves over a uniform multi-model portfolio baseline. Our results indicate that diversity across models can be exploited at inference to improve best-of-n performance over any constituent model alone, providing a simple, training-free path to test-time scaling with multiple LLMs.
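To make the routing idea concrete, here is a minimal sketch of one plausible reading of the procedure: each round, candidate models are scored by their running average reward (augmented with an agreement bonus from the modal answer so far), the top-scoring model generates the next response, and the final answer is chosen by standard best-of-n selection. All function names (`robon`, `reward_fn`, the stub models) and the specific scoring rule are illustrative assumptions, not the paper's exact algorithm.

```python
from collections import Counter

def robon(models, prompt, n, reward_fn, agreement_weight=0.5):
    """Hypothetical sketch of routed online best-of-n.

    models:    list of callables prompt -> response (stand-ins for LLMs)
    reward_fn: callable (prompt, response) -> float (plug-in reward model)
    The routing score combines average reward per model with an agreement
    signal: the share of prior responses matching the modal answer.
    """
    responses = []                  # responses generated so far
    totals = [0.0] * len(models)    # cumulative score per model
    counts = [0] * len(models)      # generations drawn per model

    for _ in range(n):
        # Agreement signal: fraction of prior responses agreeing with the mode.
        modal_share = (
            Counter(responses).most_common(1)[0][1] / len(responses)
            if responses else 0.0
        )
        # Route: pick the model with the best average score;
        # unused models score +inf so every model is tried first (cold start).
        def avg_score(i):
            return totals[i] / counts[i] if counts[i] else float("inf")
        i = max(range(len(models)), key=avg_score)

        resp = models[i](prompt)
        score = reward_fn(prompt, resp) + agreement_weight * modal_share
        responses.append(resp)
        totals[i] += score
        counts[i] += 1

    # Standard best-of-n selection: return the highest-reward response.
    return max(responses, key=lambda r: reward_fn(prompt, r))
```

With two stub "models" that always answer "A" and "B" respectively, and a reward model preferring "B", the router quickly concentrates its budget on the second model after the initial exploratory draws.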