We present a detailed study of top-k classification, the task of predicting the k most probable classes for an input, extending beyond single-class prediction. We demonstrate that several prevalent surrogate loss functions in multi-class classification, such as comp-sum and constrained losses, are supported by H-consistency bounds with respect to the top-k loss. These bounds guarantee consistency in relation to the hypothesis set H, providing stronger guarantees than Bayes-consistency due to their non-asymptotic and hypothesis-set specific nature. To address the trade-off between accuracy and cardinality k, we further introduce cardinality-aware loss functions through instance-dependent cost-sensitive learning. For these functions, we derive cost-sensitive comp-sum and constrained surrogate losses, establishing their H-consistency bounds and Bayes-consistency. Minimizing these losses leads to new cardinality-aware algorithms for top-k classification. We report the results of extensive experiments on CIFAR-100, ImageNet, CIFAR-10, and SVHN datasets demonstrating the effectiveness and benefit of these algorithms.
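To make the objects above concrete, here is a minimal NumPy sketch, not taken from the paper, of the top-k zero-one loss, a standard softmax cross-entropy surrogate (one member of the comp-sum family), and an illustrative cardinality-aware cost that penalizes larger k. The function names and the cost_per_class penalty are assumptions chosen for illustration, not the paper's exact formulation.

```python
import numpy as np

def top_k_loss(scores, y, k):
    """Top-k zero-one loss: 1 if the true label y is not among the
    k highest-scoring classes, 0 otherwise."""
    top_k = np.argsort(scores)[::-1][:k]
    return float(y not in top_k)

def cross_entropy_surrogate(scores, y):
    """Softmax cross-entropy, a member of the comp-sum family,
    used here as a smooth surrogate for the top-k loss."""
    z = scores - scores.max()                  # shift for numerical stability
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[y]

def cardinality_aware_cost(scores, y, k, cost_per_class=0.05):
    """Illustrative instance-dependent cost: top-k error plus a penalty that
    grows with k, capturing the accuracy/cardinality trade-off.
    `cost_per_class` is a hypothetical constant, not from the paper."""
    return top_k_loss(scores, y, k) + cost_per_class * k

# Example: 5-class scores, true label 2
scores = np.array([1.2, 0.3, 0.9, -0.5, 0.1])
print(top_k_loss(scores, y=2, k=2))            # 0.0: label 2 is among the top 2
print(cross_entropy_surrogate(scores, y=2))    # smooth surrogate value
print(cardinality_aware_cost(scores, y=2, k=2))
```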