
Beyond statistical significance: Quantifying uncertainty and statistical variability in multilingual and multitask NLP evaluation

Main: 9 pages · 2 figures · 12 tables · Bibliography: 3 pages · Appendix: 6 pages
Abstract

We introduce a set of resampling-based methods for quantifying uncertainty and statistical precision of evaluation metrics in multilingual and/or multitask NLP benchmarks. We show how experimental variation in performance scores arises from both model-related and data-related sources, and that accounting for both is necessary to avoid substantially underestimating the overall variability over hypothetical replications. Using multilingual question answering, machine translation, and named entity recognition as example tasks, we also demonstrate how resampling methods are useful for quantifying the replication uncertainty of various quantities reported in leaderboards, such as model rankings and pairwise differences between models.
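As a rough illustration of the kind of resampling scheme the abstract describes (not the paper's actual procedure), the Python sketch below bootstraps over both test examples and model training runs to capture data- and model-related variation in a mean evaluation score. The function name bootstrap_mean_scores, the array shapes, and all numbers are hypothetical.

import numpy as np

rng = np.random.default_rng(0)

# scores[r, i] = metric for model run r on test example i (hypothetical data:
# 5 independent training runs evaluated on 1000 test examples)
scores = rng.normal(loc=0.75, scale=0.05, size=(5, 1000))

def bootstrap_mean_scores(scores: np.ndarray, n_boot: int = 2000) -> np.ndarray:
    """Resample both model runs and test examples so that the bootstrap
    distribution reflects both sources of variability, not just one."""
    n_runs, n_examples = scores.shape
    means = np.empty(n_boot)
    for b in range(n_boot):
        runs = rng.integers(0, n_runs, size=n_runs)               # resample model runs
        examples = rng.integers(0, n_examples, size=n_examples)   # resample test data
        means[b] = scores[np.ix_(runs, examples)].mean()
    return means

boot = bootstrap_mean_scores(scores)
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"mean = {scores.mean():.3f}, 95% interval over replications: [{lo:.3f}, {hi:.3f}]")

Resampling only the examples (the common practice the abstract argues against) would leave the run axis fixed and typically yield a narrower, overconfident interval.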
