Evaluating Personalized Tool-Augmented LLMs from the Perspectives of Personalization and Proactivity

Abstract

Personalized tool utilization is essential for aligning large language models (LLMs) with user preferences in interaction scenarios involving various tools. However, most current benchmarks focus on either personalized text generation or direct tool utilization, without considering both. In this work, we introduce ETAPP, a novel benchmark for evaluating personalized tool invocation, comprising a sandbox environment and a comprehensive dataset of 800 testing cases covering diverse user profiles. To improve the accuracy of our evaluation, we propose a key-point-based LLM evaluation method that mitigates biases in the LLM-as-a-judge system by manually annotating key points for each test case and providing them to the judge LLM as a reference. Additionally, we evaluate several leading LLMs and provide an in-depth analysis. Furthermore, we investigate the impact of different tool-invoking strategies on LLMs' personalization performance and the effects of fine-tuning on our task. The effectiveness of our preference setting and key-point-based evaluation method is also validated. Our findings offer insights into improving personalized LLM agents. Our code is available at this https URL.
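The key-point-based evaluation described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, prompt wording, and aggregation rule are assumptions; the paper's actual rubric and judge prompts may differ.

```python
# Hypothetical sketch of key-point-based LLM-as-a-judge evaluation.
# Manually annotated key points are injected into the judge prompt so
# the judge scores against an explicit reference instead of free-form.

def build_judge_prompt(query: str, response: str, key_points: list[str]) -> str:
    """Format a judge prompt grounded in annotated key points."""
    points = "\n".join(f"- {p}" for p in key_points)
    return (
        "You are evaluating a tool-augmented assistant.\n"
        f"User query: {query}\n"
        f"Assistant response: {response}\n"
        "Reference key points the response should satisfy:\n"
        f"{points}\n"
        "For each key point, answer yes or no, then give an overall score."
    )

def keypoint_score(verdicts: list[bool]) -> float:
    """Aggregate per-key-point yes/no verdicts into a fraction satisfied."""
    if not verdicts:
        return 0.0
    return sum(verdicts) / len(verdicts)

prompt = build_judge_prompt(
    "Book me a quiet restaurant tonight",
    "Reserved a table at a quiet bistro at 19:00 via the booking tool.",
    ["Uses the restaurant-booking tool", "Respects the 'quiet' preference"],
)
```

Anchoring the judge to per-case key points turns a subjective holistic rating into a checklist, which is the bias-mitigation idea the abstract describes.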

View on arXiv
@article{hao2025_2503.00771,
  title={Evaluating Personalized Tool-Augmented LLMs from the Perspectives of Personalization and Proactivity},
  author={Yupu Hao and Pengfei Cao and Zhuoran Jin and Huanxuan Liao and Yubo Chen and Kang Liu and Jun Zhao},
  journal={arXiv preprint arXiv:2503.00771},
  year={2025}
}