When2Call: When (not) to Call Tools

Leveraging external tools is a key capability that lets modern Language Models (LMs) expand what they can do and integrate into existing systems. However, existing benchmarks primarily focus on the accuracy of tool calling -- whether the correct tool is called with the correct parameters -- and less on evaluating when LMs should (not) call tools. We develop a new benchmark, When2Call, which evaluates tool-calling decision-making: when to generate a tool call, when to ask follow-up questions, and when to admit the question can't be answered with the tools provided. We find that state-of-the-art tool-calling LMs show significant room for improvement on When2Call, indicating the importance of this benchmark. We also develop a training set for When2Call and leverage the multiple-choice nature of the benchmark to develop a preference optimization training regime, which yields considerably more improvement than traditional fine-tuning. We release the benchmark, training data, and evaluation scripts at this https URL.
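
As a rough illustration of the decision-making the abstract describes, a When2Call-style multiple-choice item might pair a user request and tool schemas with candidate responses in three categories (tool call, follow-up question, cannot answer). The sketch below uses hypothetical field names and a toy weather tool; it is an assumption about the general shape of such an item, not the benchmark's actual schema.

# Hypothetical sketch of a When2Call-style multiple-choice item and scorer.
# Field names (question, tools, options, label) and the three answer
# categories are illustrative assumptions, not the benchmark's real format.
from dataclasses import dataclass
from typing import Callable

@dataclass
class When2CallItem:
    question: str            # user request
    tools: list[dict]        # tool schemas available to the model
    options: dict[str, str]  # candidate responses keyed by category
    label: str               # gold category: "tool_call", "follow_up", or "cannot_answer"

def score_item(item: When2CallItem, choose: Callable[[When2CallItem], str]) -> bool:
    """Return True if the chooser picks the option in the gold category."""
    return choose(item) == item.label

# Example: the tool needs a city, but the user did not give one,
# so the gold behaviour is to ask a follow-up question, not to call the tool.
item = When2CallItem(
    question="What's the weather like?",
    tools=[{"name": "get_weather", "parameters": {"city": "string"}}],
    options={
        "tool_call": '{"name": "get_weather", "arguments": {"city": "Paris"}}',
        "follow_up": "Which city would you like the weather for?",
        "cannot_answer": "I don't have a tool that can answer this.",
    },
    label="follow_up",
)

# `choose` would normally wrap an LM; here a trivial stand-in picks "follow_up".
print(score_item(item, choose=lambda it: "follow_up"))  # True

Because each item fixes one correct category, pairs of correct and incorrect options can also serve as chosen/rejected pairs for the preference optimization training the abstract mentions.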
@article{ross2025_2504.18851,
  title   = {When2Call: When (not) to Call Tools},
  author  = {Hayley Ross and Ameya Sunil Mahabaleshwarkar and Yoshi Suhara},
  journal = {arXiv preprint arXiv:2504.18851},
  year    = {2025}
}