
UQABench: Evaluating User Embedding for Prompting LLMs in Personalized Question Answering

Abstract

Large language models (LLMs) achieve remarkable success in natural language processing (NLP). In practical scenarios such as recommendation, where users increasingly seek personalized experiences, it becomes crucial to incorporate user interaction history into the LLM's context to enhance personalization. However, from a practical utility perspective, the extensive length and noise of user interaction histories make them challenging to use directly as text prompts. A promising solution is to compress and distill interactions into compact embeddings that serve as soft prompts, assisting LLMs in generating personalized responses. Although this approach brings efficiency, a critical concern arises: can user embeddings adequately capture valuable information and effectively prompt LLMs? To address this concern, we propose UQABench, a benchmark designed to evaluate the effectiveness of user embeddings in prompting LLMs for personalization. We establish a fair and standardized evaluation process encompassing pre-training, fine-tuning, and evaluation stages. To evaluate user embeddings thoroughly, we design tasks along three dimensions: sequence understanding, action prediction, and interest perception. These tasks cover both the industry's demands in traditional recommendation settings, such as improving prediction accuracy, and its aspirations for LLM-based methods, such as accurately understanding user interests and enhancing the user experience. We conduct extensive experiments on various state-of-the-art methods for modeling user embeddings. Additionally, we reveal the scaling laws of leveraging user embeddings to prompt LLMs. The benchmark is available online.
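
To make the soft-prompting setup concrete, below is a minimal sketch of the general technique the abstract describes: compressing an interaction sequence into a handful of embedding vectors and prepending them to an LLM's input token embeddings. This is not the paper's implementation; the module names, dimensions, and attention-pooling scheme are illustrative assumptions, and the only external API used is Hugging Face's standard get_input_embeddings accessor.

import torch
import torch.nn as nn

class UserEncoder(nn.Module):
    """Hypothetical encoder: compress a user's interaction history
    into `num_prompts` soft-prompt vectors in the LLM's hidden space."""

    def __init__(self, num_items, item_dim=128, llm_dim=4096, num_prompts=8):
        super().__init__()
        self.item_emb = nn.Embedding(num_items, item_dim)
        layer = nn.TransformerEncoderLayer(d_model=item_dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        # Learnable queries that attention-pool the encoded history.
        self.queries = nn.Parameter(torch.randn(num_prompts, item_dim))
        self.pool = nn.MultiheadAttention(item_dim, num_heads=4, batch_first=True)
        self.proj = nn.Linear(item_dim, llm_dim)

    def forward(self, item_ids):                       # (B, seq_len) item IDs
        h = self.encoder(self.item_emb(item_ids))      # (B, seq_len, item_dim)
        q = self.queries.unsqueeze(0).expand(h.size(0), -1, -1)
        pooled, _ = self.pool(q, h, h)                 # (B, num_prompts, item_dim)
        return self.proj(pooled)                       # (B, num_prompts, llm_dim)

def build_inputs(llm, user_prompts, input_ids):
    """Prepend user soft prompts to the token embeddings of the question,
    to be passed to the LLM via its `inputs_embeds` argument."""
    tok_emb = llm.get_input_embeddings()(input_ids)    # (B, T, llm_dim)
    return torch.cat([user_prompts, tok_emb], dim=1)   # (B, num_prompts + T, llm_dim)

Under this sketch, only the encoder (and optionally the projection) is trained during the pre-training and fine-tuning stages, while the frozen LLM consumes the concatenated sequence; this is the usual efficiency argument for soft prompts over long text prompts.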

@article{liu2025_2502.19178,
  title={UQABench: Evaluating User Embedding for Prompting LLMs in Personalized Question Answering},
  author={Langming Liu and Shilei Liu and Yujin Yuan and Yizhen Zhang and Bencheng Yan and Zhiyuan Zeng and Zihao Wang and Jiaqi Liu and Di Wang and Wenbo Su and Pengjie Wang and Jian Xu and Bo Zheng},
  journal={arXiv preprint arXiv:2502.19178},
  year={2025}
}