HammerBench: Fine-Grained Function-Calling Evaluation in Real Mobile Device Scenarios

21 December 2024
Jun Wang
Jiamu Zhou
Muning Wen
Xiaoyun Mo
Haoyu Zhang
Qiqiang Lin
Cheng Jin
Xihuai Wang
Weinan Zhang
Qiuying Peng
Jun Wang
Abstract

Evaluating the performance of LLMs in multi-turn human-agent interactions presents significant challenges, particularly due to the complexity and variability of user behavior. In this paper, we introduce HammerBench, a novel benchmark framework for assessing LLMs' function-calling capabilities in real-world, multi-turn dialogues. HammerBench simulates diverse mobile assistant use cases, incorporating imperfect instructions, dynamic question-answer trajectories, intent and argument shifts, and the indirect use of external information through pronouns. To construct this benchmark, we curate a comprehensive dataset derived from popular mobile app functionalities and anonymized user logs, complemented by a cost-effective data generation pipeline leveraging open-source models. HammerBench is further augmented with fine-grained interaction snapshots and metrics, enabling detailed evaluation of function-calling performance across individual conversational turns. We demonstrate the effectiveness of HammerBench by evaluating several leading LLMs and uncovering key performance trends. Our experiments reveal that parameter name errors of various types are a significant source of failure across interaction scenarios, highlighting critical areas for improving LLM robustness in mobile assistant applications.
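To make the snapshot-based evaluation concrete, the sketch below scores a single conversational turn by comparing a predicted function call against its gold reference, separating function-name errors, parameter-name errors (hallucinated or missing argument names), and parameter-value errors. This is a minimal illustration under assumed JSON-style call representations; the field names and metric definitions are ours, not HammerBench's exact schema or metrics.

# Minimal sketch of per-turn ("snapshot") function-calling scoring.
# The call format and metric names are illustrative assumptions,
# not HammerBench's exact data schema or metric definitions.

from dataclasses import dataclass, field
from typing import Any, Dict

@dataclass
class FunctionCall:
    name: str                                  # e.g. "create_alarm"
    arguments: Dict[str, Any] = field(default_factory=dict)

def score_turn(pred: FunctionCall, gold: FunctionCall) -> Dict[str, float]:
    """Score one conversational turn against its gold function call."""
    name_ok = float(pred.name == gold.name)

    pred_keys, gold_keys = set(pred.arguments), set(gold.arguments)
    # Parameter-name errors: hallucinated or missing argument names.
    name_precision = len(pred_keys & gold_keys) / len(pred_keys) if pred_keys else 1.0
    name_recall = len(pred_keys & gold_keys) / len(gold_keys) if gold_keys else 1.0

    # Parameter-value errors: correct argument name but wrong value.
    shared = pred_keys & gold_keys
    value_acc = (
        sum(pred.arguments[k] == gold.arguments[k] for k in shared) / len(shared)
        if shared else 1.0
    )

    return {
        "function_name_acc": name_ok,
        "param_name_precision": name_precision,
        "param_name_recall": name_recall,
        "param_value_acc": value_acc,
    }

# Example: an argument-shift turn where the user changed the alarm time mid-dialogue.
pred = FunctionCall("create_alarm", {"time": "07:00", "label": "workout"})
gold = FunctionCall("create_alarm", {"time": "06:30", "label": "workout"})
print(score_turn(pred, gold))
# {'function_name_acc': 1.0, 'param_name_precision': 1.0,
#  'param_name_recall': 1.0, 'param_value_acc': 0.5}

Aggregating such per-turn scores over a dialogue's question-answer trajectory is one way to localize where a model first drifts, e.g. on an intent shift versus on a pronoun-resolved argument.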

@article{wang2025_2412.16516,
  title={HammerBench: Fine-Grained Function-Calling Evaluation in Real Mobile Device Scenarios},
  author={Jun Wang and Jiamu Zhou and Muning Wen and Xiaoyun Mo and Haoyu Zhang and Qiqiang Lin and Cheng Jin and Xihuai Wang and Weinan Zhang and Qiuying Peng and Jun Wang},
  journal={arXiv preprint arXiv:2412.16516},
  year={2025}
}