
LangProBe: a Language Programs Benchmark

Abstract

Composing language models (LMs) into multi-step language programs and automatically optimizing their modular prompts is now a mainstream paradigm for building AI systems, but the tradeoffs in this space have scarcely been studied. We introduce LangProBe, the first large-scale benchmark for evaluating the architectures and optimization strategies of language programs, spanning over 2000 combinations of tasks, architectures, optimizers, and choices of LMs. Using LangProBe, we are the first to study how program architectures and optimizers (and their compositions, together and with different models) affect the tradeoff between quality and cost. We find that optimized language programs offer a strong cost-quality Pareto improvement over raw calls to models, but we also show that human judgment (or empirical decisions) about which compositions to pursue remains necessary for the best performance. We will open-source the code and evaluation data for LangProBe.
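
As a purely illustrative sketch of what the abstract calls a multi-step language program with automatically optimized modular prompts, the snippet below composes two LM calls and compiles them with a prompt optimizer using DSPy (a framework from an overlapping set of authors). The task, signatures, metric, training data, and optimizer choice are assumptions made for illustration and are not drawn from LangProBe itself.

import dspy

# Configure an LM for the program (placeholder model name; any supported model works).
dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))

class SummarizeThenAnswer(dspy.Module):
    """A two-step language program: summarize the context, then answer over the summary."""
    def __init__(self):
        super().__init__()
        self.summarize = dspy.ChainOfThought("context -> summary")
        self.answer = dspy.ChainOfThought("summary, question -> answer")

    def forward(self, context, question):
        summary = self.summarize(context=context).summary
        return self.answer(summary=summary, question=question)

# A simple metric for optimization (exact match on the answer field, as an example).
def exact_match(example, prediction, trace=None):
    return example.answer.strip().lower() == prediction.answer.strip().lower()

# A tiny training set with placeholder values (replace with real examples).
trainset = [
    dspy.Example(context="...", question="...", answer="...").with_inputs("context", "question"),
]

# "Optimizing modular prompts": compile the program against the training set,
# letting the optimizer bootstrap few-shot demonstrations for each module.
optimizer = dspy.BootstrapFewShot(metric=exact_match)
optimized_program = optimizer.compile(SummarizeThenAnswer(), trainset=trainset)

LangProBe's contribution is to benchmark many such architecture/optimizer/LM combinations against each other on quality and cost, rather than to prescribe any single composition like the one above.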

@article{tan2025_2502.20315,
  title={LangProBe: a Language Programs Benchmark},
  author={Shangyin Tan and Lakshya A Agrawal and Arnav Singhvi and Liheng Lai and Michael J Ryan and Dan Klein and Omar Khattab and Koushik Sen and Matei Zaharia},
  journal={arXiv preprint arXiv:2502.20315},
  year={2025}
}