LLM-VPRF: Large Language Model Based Vector Pseudo Relevance Feedback

Vector Pseudo Relevance Feedback (VPRF) has shown promising results in improving BERT-based dense retrieval systems through iterative refinement of query representations. This paper investigates whether VPRF generalizes to Large Language Model (LLM) based dense retrievers. We introduce LLM-VPRF and evaluate its effectiveness across multiple benchmark datasets, analyzing how different LLMs affect the feedback mechanism. Our results demonstrate that VPRF's benefits extend to LLM architectures, establishing it as a robust technique for enhancing dense retrieval performance regardless of the underlying model. This work bridges the gap between applying VPRF to traditional BERT-based dense retrievers and to modern LLM-based ones, and provides insights into future directions.
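As a rough illustration of the feedback mechanism the abstract describes, the sketch below shows one generic Rocchio-style round of vector PRF: retrieve with the query embedding, treat the top-k documents as pseudo-relevant, and mix their centroid back into the query vector. The function name, the dot-product scoring, and the `alpha`/`beta` weights are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def vprf_refine(query_vec, doc_vecs, k=3, alpha=0.8, beta=0.2):
    """One round of vector pseudo relevance feedback (generic sketch).

    query_vec: (d,) query embedding from a dense retriever.
    doc_vecs:  (n, d) document embeddings in the index.
    k, alpha, beta: illustrative hyperparameters, not the paper's values.
    """
    # Score all documents by dot product and take the top-k as pseudo-relevant.
    scores = doc_vecs @ query_vec
    top_k = doc_vecs[np.argsort(scores)[::-1][:k]]
    # Refined query: weighted combination of the original query embedding
    # and the centroid of the pseudo-relevant document embeddings.
    return alpha * query_vec + beta * top_k.mean(axis=0)
```

The refined vector is then used for a second retrieval pass; the process can be iterated, though a single round is common in practice.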
@article{li2025_2504.01448,
  title={LLM-VPRF: Large Language Model Based Vector Pseudo Relevance Feedback},
  author={Hang Li and Shengyao Zhuang and Bevan Koopman and Guido Zuccon},
  journal={arXiv preprint arXiv:2504.01448},
  year={2025}
}