Information Measures, Experiments, Multi-category Hypothesis Tests, and
Surrogate Losses
We provide a unifying view of statistical information measures, multi-class classification problems, multi-way Bayesian hypothesis testing, and loss functions, elaborating equivalence results between all of these objects. Specifically, we consider a generalization of f-divergences to multiple distributions, and we show that there is a constructive equivalence between f-divergences, statistical information (in the sense of uncertainty as elaborated by DeGroot), and loss functions for multi-category classification. We also study an extension of our results to multi-class classification problems in which we must both infer a discriminant function and a data representation (or, in the setting of a hypothesis testing problem, an experimental design), represented by a quantizer drawn from a family of possible quantizers. There, we give a complete characterization of the equivalence between loss functions, meaning that optimizing either of two losses yields the same optimal discriminant function and quantizer. A main consequence of our results is to describe those convex loss functions that are Fisher consistent for jointly choosing a data representation and minimizing the (weighted) probability of error in multi-category classification and hypothesis testing problems.
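For context, the classical two-distribution f-divergence that the abstract generalizes can be recalled as follows (this is the standard definition, not a formula taken from this paper):

```latex
% f-divergence between distributions P and Q,
% for a convex function f with f(1) = 0:
D_f(P \,\|\, Q) = \int f\!\left(\frac{dP}{dQ}\right) \, dQ
% Familiar special cases:
%   f(t) = t \log t          gives the Kullback--Leibler divergence
%   f(t) = \tfrac{1}{2}|t-1| gives the total variation distance
```

The paper's contribution, as stated above, concerns a multi-distribution generalization of this quantity and its constructive links to DeGroot-style statistical information and multi-category surrogate losses.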