TourRank: Utilizing Large Language Models for Documents Ranking with a Tournament-Inspired Strategy

Large Language Models (LLMs) are increasingly employed in zero-shot document ranking, yielding commendable results. However, several significant challenges still persist in LLMs for ranking: (1) LLMs are constrained by limited input length, precluding them from processing a large number of documents simultaneously; (2) The output document sequence is influenced by the input order of documents, resulting in inconsistent ranking outcomes; (3) Achieving a balance between cost and ranking performance is challenging. To tackle these issues, we introduce a novel document ranking method called TourRank, which is inspired by sports tournaments such as the FIFA World Cup. Specifically, we 1) overcome the limitation on input length and reduce ranking latency by incorporating a multi-stage grouping strategy similar to the parallel group stages of sports tournaments; 2) improve ranking performance and robustness to input order by using a points system to ensemble multiple ranking results. We test TourRank with different LLMs on the TREC DL datasets and the BEIR benchmark. The experimental results demonstrate that TourRank delivers state-of-the-art performance at a modest cost. The code of TourRank can be found at this https URL.
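To make the tournament idea concrete, below is a minimal Python sketch of how a multi-stage grouping strategy with a points system could be combined, as described in the abstract. The group size, the number of documents advanced per group, the per-stage point values, the number of tournament repetitions, and the `llm_select_top` helper are all assumptions for illustration, not the paper's exact procedure.

```python
import random
from collections import defaultdict

def llm_select_top(query, group_docs, k):
    """Hypothetical LLM call: prompt an LLM with the query and a small group
    of documents, and parse its answer into the indices (within the group)
    of the k documents it judges most relevant."""
    raise NotImplementedError  # plug in your preferred chat/completion model

def one_tournament(query, docs, group_size=10, advance_k=5):
    """One tournament run: shuffle the candidates, then repeatedly split the
    surviving documents into groups, let the LLM advance the top `advance_k`
    from each group (groups can be queried in parallel), and award every
    advancing document one point per stage it survives."""
    points = defaultdict(int)
    survivors = list(range(len(docs)))
    random.shuffle(survivors)                    # vary input order per run
    while len(survivors) > advance_k:
        next_stage = []
        for start in range(0, len(survivors), group_size):
            group = survivors[start:start + group_size]
            k = min(advance_k, len(group))
            for idx in llm_select_top(query, [docs[i] for i in group], k):
                winner = group[idx]
                points[winner] += 1              # points accumulate per stage
                next_stage.append(winner)
        survivors = next_stage
    return points

def tourrank(query, docs, n_tournaments=3):
    """Ensemble several independent tournaments (each with its own shuffle of
    the input order) by summing points, then rank documents by total points."""
    total = defaultdict(int)
    for _ in range(n_tournaments):
        for doc_id, pts in one_tournament(query, docs).items():
            total[doc_id] += pts
    return sorted(range(len(docs)), key=lambda i: total[i], reverse=True)
```

Because each LLM call only sees one small group, the long candidate list never has to fit in a single context window, and summing points over several shuffled runs is what gives the ensemble its robustness to input order.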
@article{chen2025_2406.11678,
  title={TourRank: Utilizing Large Language Models for Documents Ranking with a Tournament-Inspired Strategy},
  author={Yiqun Chen and Qi Liu and Yi Zhang and Weiwei Sun and Xinyu Ma and Wei Yang and Daiting Shi and Jiaxin Mao and Dawei Yin},
  journal={arXiv preprint arXiv:2406.11678},
  year={2025}
}