MetaLLM: A High-performant and Cost-efficient Dynamic Framework for Wrapping LLMs

Abstract

The rapid progress in machine learning (ML) has brought forth many large language models (LLMs) that excel in various tasks and areas. These LLMs differ in both capability and cost, whether measured in computation or pricing. Since the demands of each query can vary, e.g., because of the queried domain or its complexity, defaulting to a single LLM in an application is rarely the best choice, whether that LLM is the biggest, the priciest, or even the one with the best average test performance. Consequently, picking an LLM that is both accurate and cost-effective for an application is necessary, yet it remains a challenge. In this paper, we introduce MetaLLM, a framework that dynamically and intelligently routes each query to the optimal LLM (among several available LLMs) for classification and multiple-choice question-answering tasks, achieving significantly improved accuracy and cost-effectiveness. By framing the selection problem as a multi-armed bandit, MetaLLM balances prediction accuracy and cost efficiency under uncertainty. Our experiments, conducted on popular LLM platforms such as OpenAI and Together AI, as well as open-source LLMs, showcase MetaLLM's efficacy in real-world scenarios, laying the groundwork for future extensions.
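The multi-armed-bandit framing can be illustrated with a minimal sketch. The snippet below is an illustrative epsilon-greedy router, not MetaLLM's actual algorithm or API: each candidate LLM is an arm, and the reward for pulling an arm is assumed to be accuracy minus a cost penalty weighted by a trade-off parameter `lam` (all names here are hypothetical).

```python
import random

class EpsilonGreedyRouter:
    """Toy epsilon-greedy bandit over a set of candidate LLMs.

    Reward for pulling an arm is (accuracy - lam * cost), so the
    router trades off answer quality against price. Illustrative
    only; MetaLLM's actual policy may differ.
    """

    def __init__(self, arms, lam=0.1, epsilon=0.1, seed=0):
        self.arms = list(arms)                     # e.g. model names
        self.lam = lam                             # cost/accuracy trade-off weight
        self.epsilon = epsilon                     # exploration rate
        self.rng = random.Random(seed)
        self.counts = {a: 0 for a in self.arms}    # pulls per arm
        self.values = {a: 0.0 for a in self.arms}  # running mean reward per arm

    def select(self):
        """Pick an arm: explore with prob. epsilon, else exploit the best so far."""
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.arms)
        return max(self.arms, key=lambda a: self.values[a])

    def update(self, arm, accuracy, cost):
        """Fold the observed (accuracy, cost) into the arm's mean reward."""
        reward = accuracy - self.lam * cost
        self.counts[arm] += 1
        # incremental mean: v <- v + (r - v) / n
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]
```

For example, if a large model answers with accuracy 0.9 at cost 5.0 and a small model with accuracy 0.8 at cost 1.0, then with `lam = 0.1` the small model's reward (0.7) exceeds the large model's (0.4), so the router learns to prefer it while still exploring occasionally.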

@article{nguyen2025_2407.10834,
  title={MetaLLM: A High-performant and Cost-efficient Dynamic Framework for Wrapping LLMs},
  author={Quang H. Nguyen and Thinh Dao and Duy C. Hoang and Juliette Decugis and Saurav Manchanda and Nitesh V. Chawla and Khoa D. Doan},
  journal={arXiv preprint arXiv:2407.10834},
  year={2025}
}