Simpson's Bias in NLP Training

AAAI Conference on Artificial Intelligence (AAAI), 2021
Abstract

In most machine learning tasks, we evaluate a model $M$ on a given data population $S$ by measuring a population-level metric $F(S;M)$. Examples of such evaluation metrics $F$ include precision/recall for (binary) recognition, the F1 score for multi-class classification, and the BLEU metric for language generation. On the other hand, the model $M$ is trained by optimizing a sample-level loss $G(S_t;M)$ at each learning step $t$, where $S_t$ is a subset of $S$ (a.k.a. the mini-batch). Popular choices of $G$ include cross-entropy loss, the Dice loss, and sentence-level BLEU scores. A fundamental assumption behind this paradigm is that the mean value of the sample-level loss $G$, if averaged over all possible samples, should effectively represent the population-level metric $F$ of the task, i.e., that $\mathbb{E}[G(S_t;M)] \approx F(S;M)$. In this paper, we systematically investigate the above assumption in several NLP tasks. We show, both theoretically and experimentally, that some popular designs of the sample-level loss $G$ may be inconsistent with the true population-level metric $F$ of the task, so that models trained to optimize the former can be substantially sub-optimal with respect to the latter, a phenomenon we call Simpson's bias due to its deep connections with the classic paradox known as Simpson's reversal paradox in statistics and social sciences.
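To make the inconsistency concrete, here is a minimal Python sketch (not from the paper) on hypothetical synthetic binary labels. Because F1 is a nonlinear ratio statistic, the average of per-mini-batch F1 scores, a stand-in for $\mathbb{E}[G(S_t;M)]$, generally does not equal the F1 computed over the whole population, $F(S;M)$. The data sizes, accuracy rate, and batch size below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def f1(y_true, y_pred):
    """Binary F1 score; returns 0.0 when there are no true or predicted positives."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom > 0 else 0.0

# Hypothetical population: 10,000 labels (10% positives) and noisy predictions
# that agree with the label 80% of the time.
y_true = (rng.random(10_000) < 0.1).astype(int)
y_pred = np.where(rng.random(10_000) < 0.8, y_true, 1 - y_true)

# Population-level metric F(S; M): F1 over the whole dataset.
F_population = f1(y_true, y_pred)

# Mean sample-level objective E[G(S_t; M)]: average of per-mini-batch F1 scores.
batch_size = 32
batch_scores = [
    f1(y_true[i:i + batch_size], y_pred[i:i + batch_size])
    for i in range(0, len(y_true), batch_size)
]
G_mean = np.mean(batch_scores)

print(f"population-level F1  F(S;M)      = {F_population:.4f}")
print(f"mean per-batch F1    E[G(S_t;M)] = {G_mean:.4f}")
# The two values differ: averaging a ratio over mini-batches does not
# recover the ratio over the population, which is the root of Simpson's bias.
```

Running this shows a visible gap between the two numbers; a model tuned to maximize the per-batch average could therefore drift away from the population-level optimum.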
