Leveraging LLMs to Evaluate Usefulness of Document

Abstract

The conventional Cranfield paradigm struggles to capture user satisfaction effectively, owing to the weak correlation between relevance and satisfaction and the high cost of relevance annotation when building test collections. To tackle these issues, our research explores the potential of leveraging large language models (LLMs) to generate multilevel usefulness labels for evaluation. We introduce a new user-centric evaluation framework that integrates users' search context and behavioral data into LLMs. This framework uses a cascading judgment structure designed for multilevel usefulness assessments, drawing inspiration from ordinal regression techniques. Our study demonstrates that, when well-guided with context and behavioral information, LLMs can accurately evaluate usefulness, allowing our approach to surpass third-party labeling methods. Furthermore, we conduct ablation studies to investigate the influence of key components within the framework. We also apply the labels produced by our method to predict user satisfaction, with real-world experiments indicating that these labels substantially improve the performance of satisfaction prediction models.
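The cascading judgment structure described above can be sketched as a sequence of ordered binary threshold questions, in the spirit of ordinal regression: ask "is usefulness at least level k?" for increasing k, and stop at the first negative answer. The function name `cascade_label` and the binary-judgment representation below are illustrative assumptions, not the paper's actual prompting pipeline:

```python
def cascade_label(binary_judgments):
    """Collapse ordered binary threshold judgments into one ordinal label.

    binary_judgments[k] is a hypothetical LLM answer to the question
    "is usefulness >= level k+1?". The cascade stops at the first
    negative answer, mirroring the cumulative-threshold view used in
    ordinal regression.
    """
    label = 0
    for passed in binary_judgments:
        if not passed:
            break  # first failed threshold ends the cascade
        label += 1
    return label


# Example: the first two thresholds pass, the third fails,
# so the multilevel usefulness label is 2.
print(cascade_label([True, True, False, False]))  # -> 2
```

One appeal of this design is that each LLM call answers a simpler binary question rather than picking a level from a full multilevel scale in one shot.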

@article{wang2025_2506.08626,
  title={Leveraging LLMs to Evaluate Usefulness of Document},
  author={Xingzhu Wang and Erhan Zhang and Yiqun Chen and Jinghan Xuan and Yucheng Hou and Yitong Xu and Ying Nie and Shuaiqiang Wang and Dawei Yin and Jiaxin Mao},
  journal={arXiv preprint arXiv:2506.08626},
  year={2025}
}
Main text: 9 pages; bibliography: 2 pages; 5 figures; 13 tables.