Learning-to-Rank with Partitioned Preference: Fast Estimation for the Plackett-Luce Model

We investigate the Plackett-Luce (PL) model based listwise learning-to-rank (LTR) on data with partitioned preference, where a set of items are sliced into ordered and disjoint partitions, but the ranking of items within a partition is unknown. Given N items with M partitions, calculating the likelihood of data with partitioned preference under the PL model has a time complexity of O(N + S!), where S is the maximum size of the top M-1 partitions. This computational challenge restricts most existing PL-based listwise LTR methods to a special case of partitioned preference, top-K ranking, where the exact order of the top K items is known. In this paper, we exploit a random utility model formulation of the PL model, and propose an efficient numerical integration approach for calculating the likelihood and its gradients with a time complexity of O(N + S^3). We demonstrate that the proposed method outperforms well-known LTR baselines and remains scalable through both simulation experiments and applications to real-world eXtreme Multi-Label classification tasks.
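To make the computational challenge concrete, below is a minimal, hypothetical Python sketch (not the authors' implementation; all function names are illustrative assumptions). It shows the random utility view of the PL model, in which a ranking arises from sorting scores perturbed by i.i.d. Gumbel noise, and a brute-force likelihood for partitioned preference that sums the standard PL ranking probability over every full ranking consistent with the partitions, which is exactly the enumeration whose cost grows factorially with partition size.

```python
# Hypothetical illustration only; names and structure are assumptions, not the paper's code.
from itertools import permutations, product

import numpy as np


def sample_pl_ranking(scores, rng):
    """Random utility view of the PL model: rank items by score plus i.i.d. Gumbel noise."""
    gumbel = rng.gumbel(size=len(scores))
    return np.argsort(-(scores + gumbel))


def pl_log_likelihood(ranking, scores):
    """Log-probability of one full ranking under the Plackett-Luce model."""
    weights = np.exp(scores)
    remaining = list(ranking)
    log_prob = 0.0
    for item in ranking:
        # P(item chosen next) = w_item / sum of weights of items not yet ranked
        log_prob += scores[item] - np.log(weights[remaining].sum())
        remaining.remove(item)
    return log_prob


def partitioned_likelihood_bruteforce(partitions, scores):
    """Likelihood of a partitioned preference by enumerating every consistent full ranking.

    The number of consistent rankings is the product of the factorials of the partition
    sizes, which is why direct evaluation becomes intractable as partitions grow.
    """
    total = 0.0
    for blocks in product(*(permutations(p) for p in partitions)):
        full_ranking = [item for block in blocks for item in block]
        total += np.exp(pl_log_likelihood(full_ranking, scores))
    return total


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    scores = np.array([1.0, 0.5, 0.0, -0.5])
    # Partitioned preference: items {0, 1} are preferred over items {2, 3},
    # with no ordering known inside either partition.
    print(sample_pl_ranking(scores, rng))
    print(partitioned_likelihood_bruteforce([[0, 1], [2, 3]], scores))
```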