Benchmarking Failures in Tool-Augmented Language Models

The integration of tools has extended the capabilities of language models (LMs) beyond vanilla text generation to a wide range of scenarios. However, tool-augmented language models (TaLMs) often assume 'perfect' information access and tool availability, which may not hold in the real world. To systematically study how TaLMs handle such imperfections, we introduce the FAIL-TALMS benchmark, featuring two major failure modes: under-specified user queries and non-available tools. FAIL-TALMS contains 1,749 examples using 906 tools across 21 categories, covering both single- and multi-tool usage. We evaluate top-performing proprietary and open-source models, and find that all current models except Claude struggle to recognize missing tools or information. Further, to study possible mitigations of these failures, we introduce the Ask-and-Help (AAH) method, which enables real-time human interaction to provide missing information or replace non-functional tools. While AAH helps models solve tasks more correctly when queries are under-specified, it brings little benefit when complex tools are broken.
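The Ask-and-Help setup can be pictured as a small interaction loop: when the model detects a missing parameter or an unavailable tool, it pauses and asks the human before proceeding. The sketch below is a minimal illustration under that reading; all names (Task, ask_human, solve, ToolUnavailableError) are hypothetical and not drawn from the FAIL-TALMS benchmark or its codebase.

```python
# Minimal sketch of an Ask-and-Help style interaction loop (hypothetical names;
# not the authors' implementation).
from dataclasses import dataclass, field


class ToolUnavailableError(Exception):
    """Raised when a required tool cannot be called."""


@dataclass
class Task:
    query: str
    required_fields: list[str]
    provided_fields: dict = field(default_factory=dict)


def ask_human(prompt: str) -> str:
    """Stand-in for real-time human interaction; here it simply reads stdin."""
    return input(f"[model -> human] {prompt}\n> ")


def solve(task: Task, call_tool) -> str:
    # 1) Under-specified query: ask the user to supply each missing field.
    missing = [f for f in task.required_fields if f not in task.provided_fields]
    for name in missing:
        task.provided_fields[name] = ask_human(
            f"Your request is missing '{name}'. Could you provide it?"
        )

    # 2) Non-available tool: if the call fails, ask the human to perform the
    #    step manually or to suggest a substitute tool.
    try:
        return call_tool(task)
    except ToolUnavailableError as err:
        return ask_human(
            f"Tool failed ({err}). Can you perform this step or suggest a substitute?"
        )
```

In this reading, step (1) corresponds to recovering from under-specified queries and step (2) to recovering from broken tools, matching the two failure modes the benchmark targets.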
@article{treviño2025_2503.14227,
  title={Benchmarking Failures in Tool-Augmented Language Models},
  author={Eduardo Treviño and Hugo Contant and James Ngai and Graham Neubig and Zora Zhiruo Wang},
  journal={arXiv preprint arXiv:2503.14227},
  year={2025}
}