
Transductive Conformal Inference for Full Ranking

Main: 10 Pages
12 Figures
Bibliography: 3 Pages
1 Table
Appendix: 13 Pages
Abstract

We introduce a method based on Conformal Prediction (CP) to quantify the uncertainty of full ranking algorithms. We focus on a specific scenario where $n+m$ items are to be ranked by some "black box" algorithm. It is assumed that the relative (ground truth) ranking of $n$ of them is known. The objective is then to quantify the error made by the algorithm on the ranks of the $m$ new items among the total $(n+m)$. In such a setting, the true ranks of the $n$ original items among the total $(n+m)$ depend on the (unknown) true ranks of the $m$ new ones. Consequently, we have no direct access to a calibration set to apply a classical CP method. To address this challenge, we propose to construct distribution-free bounds on the unknown conformity scores using recent results on the distribution of conformal p-values. Using these score upper bounds, we provide valid prediction sets for the rank of any item. We also control the false coverage proportion, a crucial quantity when dealing with multiple prediction sets. Finally, we empirically demonstrate, on both synthetic and real data, the efficiency of our CP method for state-of-the-art algorithms such as RankNet or LambdaMart.
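As a minimal, purely illustrative sketch of the setting described above (not the paper's algorithm), the following Python snippet ranks $n$ reference items together with $m$ new items scored by a hypothetical black box, and reports a naive fixed-margin rank interval in place of the calibrated prediction sets derived in the paper; all variable names, the noise model, and the margin are assumptions made for illustration only.

```python
import numpy as np

# Toy illustration of the setting: n items with a known relative
# ranking, plus m new items scored by a "black box". The true rank of
# every item among all n+m depends on the unknown new items' utilities.

rng = np.random.default_rng(0)
n, m = 100, 5

# Hypothetical latent utilities; the black box only observes noisy scores.
true_utility = rng.normal(size=n + m)
black_box_score = true_utility + rng.normal(scale=0.5, size=n + m)

def ranks(values):
    """Rank items from 1 (largest value) to len(values) (smallest)."""
    order = np.argsort(-values)
    r = np.empty_like(order)
    r[order] = np.arange(1, len(values) + 1)
    return r

true_ranks = ranks(true_utility)          # unobserved in practice
predicted_ranks = ranks(black_box_score)  # what the algorithm outputs

# A naive (non-conformal) interval for each new item's rank: widen the
# predicted rank by a fixed margin. The paper instead derives valid
# margins from distribution-free upper bounds on the unobserved
# conformity scores, via results on conformal p-values.
margin = 10  # placeholder; NOT a calibrated quantity
for j in range(n, n + m):
    lo = max(1, predicted_ranks[j] - margin)
    hi = min(n + m, predicted_ranks[j] + margin)
    print(f"new item {j - n}: predicted rank {predicted_ranks[j]}, "
          f"naive set [{lo}, {hi}], true rank {true_ranks[j]}")
```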
