Online Ranking Given Discrete Choice Feedback
Given a set of objects, an online ranking system outputs, at each time step, a full ranking of the set, observes feedback of some form, and suffers a loss. We study the setting in which the (adversarial) feedback is a choice of a single item from the set, and the loss is the position (1st, 2nd, 3rd, ...) of that item in the output ranking. For this simple problem we present an algorithm with low expected regret over the time horizon, with respect to the best single ranking in hindsight. This improves on previously known algorithms in two ways: (i) it shaves off a factor from the expected regret bound, and (ii) it is extremely simple to implement, compared to previous algorithms (some of which it is not even clear how to execute in sub-exponential time). Our algorithm extends to a more general class of ranking problems in which the feedback is a vector of values over the elements of the set, and the loss is the sum of magnitudes of pairwise inversions (also known as AUC loss in the literature). The main tool is the use of randomized sorting algorithms that, restricted to any fixed pair of items, give rise to a multiplicative weights update scheme on a binary action set consisting of the pair's two possible orderings.
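To make the pairwise view concrete, the following is a minimal sketch (not the paper's algorithm) of a multiplicative weights update scheme on a binary action set: the two actions are the two possible orderings of one fixed pair of items, each round reveals a loss for each ordering, and the weights are updated exponentially. The function name, learning rate, and loss encoding are illustrative assumptions.

```python
import math

def mwu_pair_ordering(losses, eta=0.5):
    """Multiplicative weights over the two orderings of a fixed pair (i, j).

    losses: sequence of (loss_if_i_before_j, loss_if_j_before_i) pairs,
            each loss in [0, 1] (an illustrative encoding, not the paper's).
    eta:    learning rate (assumed constant here for simplicity).
    Returns the total expected loss incurred by the scheme.
    """
    w = [1.0, 1.0]  # one weight per action: 0 -> "i before j", 1 -> "j before i"
    total = 0.0
    for l0, l1 in losses:
        p0 = w[0] / (w[0] + w[1])          # probability of playing ordering 0
        total += p0 * l0 + (1.0 - p0) * l1  # expected loss this round
        w[0] *= math.exp(-eta * l0)         # exponential-weights update
        w[1] *= math.exp(-eta * l1)
    return total
```

For instance, if the feedback consistently favors one ordering, the scheme's weight on that ordering grows and its cumulative expected loss stays within a small additive term of the best fixed ordering in hindsight, which is the standard multiplicative-weights regret guarantee restricted to a two-action set.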