Simpson's Bias in NLP Training

13 March 2021 · arXiv:2103.11795
Fei Yuan, Longtu Zhang, Bojun Huang, Yaobo Liang
Abstract

In most machine learning tasks, we evaluate a model $M$ on a given data population $S$ by measuring a population-level metric $F(S;M)$. Examples of such an evaluation metric $F$ include precision/recall for (binary) recognition, the F1 score for multi-class classification, and the BLEU metric for language generation. On the other hand, the model $M$ is trained by optimizing a sample-level loss $G(S_t;M)$ at each learning step $t$, where $S_t$ is a subset of $S$ (a.k.a. the mini-batch). Popular choices of $G$ include cross-entropy loss, the Dice loss, and sentence-level BLEU scores. A fundamental assumption behind this paradigm is that the mean value of the sample-level loss $G$, averaged over all possible samples, effectively represents the population-level metric $F$ of the task, i.e., that $\mathbb{E}[G(S_t;M)] \approx F(S;M)$. In this paper, we systematically investigate the above assumption in several NLP tasks. We show, both theoretically and experimentally, that some popular designs of the sample-level loss $G$ may be inconsistent with the true population-level metric $F$ of the task, so that models trained to optimize the former can be substantially sub-optimal with respect to the latter, a phenomenon we call Simpson's bias due to its deep connections with the classic paradox known as Simpson's reversal paradox in statistics and the social sciences.
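To make the gap between $\mathbb{E}[G(S_t;M)]$ and $F(S;M)$ concrete, here is a minimal Python sketch (not taken from the paper; the batch counts are invented for illustration) showing that the mean of per-mini-batch F1 scores can differ sharply from the F1 score computed on the pooled population:

```python
def f1(tp, fp, fn):
    """F1 = 2*TP / (2*TP + FP + FN), taken as 0 when the denominator is 0."""
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

# Two hypothetical mini-batches with very different class balance,
# given as (TP, FP, FN) counts produced by some fixed model M.
batches = [(1, 0, 9), (90, 10, 0)]

# Sample-level view: average the per-batch metric, i.e. E[G(S_t; M)].
mean_batch_f1 = sum(f1(*b) for b in batches) / len(batches)

# Population-level view: pool the counts first, then score, i.e. F(S; M).
tp, fp, fn = map(sum, zip(*batches))
population_f1 = f1(tp, fp, fn)

print(f"mean per-batch F1: {mean_batch_f1:.3f}")  # ~0.565
print(f"population F1:     {population_f1:.3f}")  # ~0.905
```

A model tuned to maximize the first quantity can thus rank very differently under the second; the Simpson's bias studied in the paper is this systematic mismatch for sample-level losses such as Dice and sentence-level BLEU.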
