On the Robustness of Agentic Function Calling

Large Language Models (LLMs) are increasingly acting as autonomous agents, with function calling (FC) capabilities enabling them to invoke specific tools for tasks. While prior research has primarily focused on improving FC accuracy, little attention has been given to the robustness of these agents to perturbations in their input. We introduce a benchmark assessing FC robustness in two key areas: resilience to naturalistic query variations, and stability in function calling when the toolkit expands with semantically related tools. Evaluating best-performing FC models on a carefully expanded subset of the Berkeley Function Calling Leaderboard (BFCL), we identify critical weaknesses in existing evaluation methodologies and highlight areas for improvement in real-world agentic deployments.
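The two robustness probes the abstract describes can be illustrated with a minimal sketch. The `pick_tool` scorer below is a hypothetical stand-in for an FC model (a real evaluation would call an actual LLM's function-calling API), and the toolkit, queries, and tool names are invented for illustration; the probe structure, not the scorer, is the point.

```python
def pick_tool(query, toolkit):
    """Hypothetical stand-in for an FC model: scores tools by keyword
    overlap between the query and each tool's description."""
    words = set(query.lower().split())
    return max(toolkit, key=lambda t: len(words & set(t["desc"].lower().split())))

BASE_TOOLKIT = [
    {"name": "get_weather", "desc": "get the current weather for a city"},
    {"name": "get_time",    "desc": "get the current local time for a city"},
]

# Probe 1: resilience to naturalistic query variations --
# the same intent phrased differently should yield the same call.
paraphrases = [
    "what is the weather in Paris",
    "current weather for the city of Paris",
]
calls = {pick_tool(q, BASE_TOOLKIT)["name"] for q in paraphrases}
paraphrase_stable = len(calls) == 1

# Probe 2: stability under toolkit expansion --
# adding a semantically related distractor tool should not flip the call.
distractor = {"name": "get_forecast", "desc": "get the weather forecast for a city"}
before = pick_tool(paraphrases[0], BASE_TOOLKIT)["name"]
after = pick_tool(paraphrases[0], BASE_TOOLKIT + [distractor])["name"]
distractor_stable = before == after

print(paraphrase_stable, distractor_stable)
```

A real benchmark run would replace `pick_tool` with model inference and aggregate these per-example stability flags into robustness scores.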
@article{rabinovich2025_2504.00914,
  title={On the Robustness of Agentic Function Calling},
  author={Ella Rabinovich and Ateret Anaby-Tavor},
  journal={arXiv preprint arXiv:2504.00914},
  year={2025}
}