Examining Single Sentence Label Leakage in Natural Language Inference Datasets
Many believe human-level natural language inference (NLI) has already been achieved. In reality, modern NLI benchmarks have serious flaws that render this apparent progress questionable. Chief among them is single sentence label leakage, where spurious correlations and biases in a dataset enable accurate prediction of a sentence pair's relation from only one of its sentences, something that should in principle be impossible. This leakage allows models to cheat rather than learn the intended reasoning capabilities, and it has persisted since its discovery in 2018. We analyze this problem across 10 modern NLI datasets and find that new datasets exhibit single sentence accuracy of at best 8% over chance, and 19% over chance on average. We examine how standard NLI models exploit this leakage and discuss possible mitigations.
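To make the phenomenon concrete, the sketch below shows a hypothesis-only baseline in miniature: a classifier that never sees the premise, yet predicts the relation from lexical cues in the hypothesis alone. The tiny dataset and cue words here are invented for illustration (real NLI datasets leak in subtler ways), but the mechanism is the same one the abstract describes.

```python
from collections import Counter, defaultdict

# Invented toy "hypotheses" with NLI labels -- no premises at all.
# Negation words like "nobody" correlating with "contradiction" is a
# well-known example of this kind of cue; the rest is illustrative.
train = [
    ("A man is sleeping.", "contradiction"),
    ("Nobody is outside.", "contradiction"),
    ("A person is outdoors.", "entailment"),
    ("Someone is moving.", "entailment"),
    ("The man is waiting for a friend.", "neutral"),
    ("A woman is on her way to work.", "neutral"),
]

# Count how often each hypothesis word co-occurs with each label.
word_label = defaultdict(Counter)
for hyp, label in train:
    for w in hyp.lower().strip(".").split():
        word_label[w][label] += 1

def predict(hypothesis):
    """Predict a label from the hypothesis alone by voting over word cues."""
    votes = Counter()
    for w in hypothesis.lower().strip(".").split():
        votes.update(word_label.get(w, Counter()))
    return votes.most_common(1)[0][0] if votes else "neutral"

# The negation cue dominates, so no premise is needed:
print(predict("Nobody is sleeping."))  # -> contradiction
```

Any accuracy such a premise-blind model achieves above chance is, by construction, leakage rather than inference; the paper's 8-19%-over-chance figures are measured with far stronger single-sentence classifiers than this word-vote toy.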