In most machine learning tasks, we evaluate a model $M$ on a given data population $S$ by measuring a population-level metric $F(S; M)$. Examples of such evaluation metrics $F$ include precision/recall for (binary) recognition, the F1 score for multi-class classification, and the BLEU metric for language generation. On the other hand, the model $M$ is trained by optimizing a sample-level loss $G(S_t; M)$ at each learning step $t$, where $S_t$ is a subset of $S$ (a.k.a. the mini-batch). Popular choices of $G$ include the cross-entropy loss, the Dice loss, and sentence-level BLEU scores. A fundamental assumption behind this paradigm is that the mean value of the sample-level loss $G$, if averaged over all possible samples, should effectively represent the population-level metric $F$ of the task, that is, that $\mathbb{E}[G(S_t; M)] \approx F(S; M)$. In this paper, we systematically investigate the above assumption in several NLP tasks. We show, both theoretically and experimentally, that some popular designs of the sample-level loss $G$ may be inconsistent with the true population-level metric $F$ of the task, so that models trained to optimize the former can be substantially sub-optimal with respect to the latter, a phenomenon we call Simpson's bias due to its deep connections with the classic paradox known as Simpson's reversal paradox in statistics and social sciences.
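As a minimal illustration of how such a mismatch between $G$ and $F$ can arise (the counts and the `precision` helper below are invented for this sketch and are not taken from the paper), consider averaging per-sample precision over samples of different sizes versus computing precision on the pooled counts: the two aggregates can disagree, and can even rank two models in opposite orders, which is the flavor of Simpson's reversal the abstract alludes to.

```python
# Illustrative sketch only: per-sample precision averaged over samples vs.
# precision computed on pooled counts. All numbers are invented for this example.

def precision(tp, fp):
    """Precision from true-positive and false-positive counts."""
    return tp / (tp + fp) if tp + fp > 0 else 0.0

# (true positives, false positives) per sample for two hypothetical models,
# evaluated on the same two samples of very different sizes.
models = {
    "A": [(9, 1), (10, 90)],   # sample-level precisions: 0.90 and 0.10
    "B": [(8, 2), (15, 85)],   # sample-level precisions: 0.80 and 0.15
}

for name, counts in models.items():
    mean_sample_level = sum(precision(tp, fp) for tp, fp in counts) / len(counts)
    pooled = precision(sum(tp for tp, _ in counts), sum(fp for _, fp in counts))
    print(f"model {name}: mean per-sample precision = {mean_sample_level:.3f}, "
          f"pooled precision = {pooled:.3f}")

# Model A wins on the averaged (sample-level) score, 0.500 vs 0.475, while
# model B wins on the pooled (population-level) score, ~0.209 vs ~0.173:
# optimizing the averaged quantity need not optimize the population metric.
```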