On Speeding Up Language Model Evaluation

Developing prompt-based methods with Large Language Models (LLMs) requires making numerous decisions, which give rise to a combinatorial search problem over hyper-parameters. Exhaustively evaluating every configuration can be time-consuming and costly. In this paper, we propose an approach to explore this space efficiently. We exploit the fact that often only a few samples are needed to identify clearly superior or inferior settings, and that many evaluation tests are highly correlated. We lean on multi-armed bandits to sequentially identify the next (method, validation sample)-pair to evaluate, and utilize low-rank matrix factorization to fill in missing evaluations. We carefully assess the efficacy of our approach on several competitive benchmark problems and show that it can identify the top-performing method using only 5-15% of the typically required resources, resulting in 85-95% LLM cost savings. Our code is available at this https URL.
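A minimal sketch of the general idea described above, not the paper's exact algorithm: the abstract does not name the specific bandit or factorization variants, so this illustration uses UCB-style method selection and alternating least squares as stand-ins, and the function names (low_rank_complete, select_best_method, evaluate) and parameters (rank, budget, c) are hypothetical. The caller would supply evaluate(i, j), which scores method i on validation sample j by querying the LLM.

import numpy as np

def low_rank_complete(S, mask, rank=2, iters=50, reg=1e-2, seed=0):
    # Fill missing entries of a (methods x samples) score matrix S via
    # alternating least squares fit only to the observed entries (mask == True).
    rng = np.random.default_rng(seed)
    m, n = S.shape
    U = rng.normal(scale=0.1, size=(m, rank))
    V = rng.normal(scale=0.1, size=(n, rank))
    I = np.eye(rank)
    for _ in range(iters):
        for i in range(m):
            idx = mask[i]
            if idx.any():
                U[i] = np.linalg.solve(V[idx].T @ V[idx] + reg * I, V[idx].T @ S[i, idx])
        for j in range(n):
            idx = mask[:, j]
            if idx.any():
                V[j] = np.linalg.solve(U[idx].T @ U[idx] + reg * I, U[idx].T @ S[idx, j])
    return U @ V.T

def select_best_method(evaluate, n_methods, n_samples, budget, rank=2, c=1.0, seed=0):
    # UCB-style bandit over methods: repeatedly pick the method with the highest
    # optimistic estimated mean, evaluate it on one unseen validation sample, and
    # use the completed matrix to estimate means for sparsely observed methods.
    rng = np.random.default_rng(seed)
    S = np.zeros((n_methods, n_samples))
    mask = np.zeros((n_methods, n_samples), dtype=bool)
    # Warm start: one random sample per method.
    for i in range(n_methods):
        j = int(rng.integers(n_samples))
        S[i, j] = evaluate(i, j)
        mask[i, j] = True
    for t in range(n_methods, budget):
        S_hat = low_rank_complete(S, mask, rank=rank)
        counts = mask.sum(axis=1)
        ucb = S_hat.mean(axis=1) + c * np.sqrt(np.log(t + 1) / counts)
        ucb[counts >= n_samples] = -np.inf  # skip fully evaluated methods
        i = int(np.argmax(ucb))
        if not np.isfinite(ucb[i]):
            break  # everything has been evaluated on all samples
        j = int(rng.choice(np.flatnonzero(~mask[i])))
        S[i, j] = evaluate(i, j)
        mask[i, j] = True
    S_hat = low_rank_complete(S, mask, rank=rank)
    return int(np.argmax(S_hat.mean(axis=1)))

Under this sketch, the evaluation budget (number of (method, sample) LLM calls) is the quantity being reduced: instead of filling the full methods-by-samples grid, only the entries the bandit requests are computed and the rest are imputed by the low-rank model.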
@article{zhou2025_2407.06172,
  title   = {On Speeding Up Language Model Evaluation},
  author  = {Jin Peng Zhou and Christian K. Belardi and Ruihan Wu and Travis Zhang and Carla P. Gomes and Wen Sun and Kilian Q. Weinberger},
  journal = {arXiv preprint arXiv:2407.06172},
  year    = {2025}
}