
Confidence and Stability of Global and Pairwise Scores in NLP Evaluation

Georgii Levtsov
Dmitry Ustalov
Main: 7 pages · Appendix: 3 pages · Bibliography: 3 pages · 4 figures · 7 tables
Abstract

With the advent of highly capable instruction-tuned neural language models, benchmarking in natural language processing (NLP) is increasingly shifting from traditional global pointwise scores (e.g., GLUE, BIG-bench, SWE-bench) towards pairwise comparison leaderboards such as LMSYS Arena. This paper empirically investigates the strengths and weaknesses of both global scores and pairwise comparisons to aid decision-making in selecting appropriate model evaluation strategies. Through computational experiments on synthetic and real-world datasets, using standard global metrics and the popular Bradley-Terry model for pairwise comparisons, we found that while global scores provide more reliable overall rankings, they can underestimate strong models that make rare but significant errors or have low confidence. Conversely, pairwise comparisons are particularly effective for identifying strong contenders among models with lower global scores, especially where quality metrics are hard to define (e.g., text generation), though they require more comparisons to converge when ties are frequent. Our code and data are available at this https URL under a permissive license.
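For readers unfamiliar with the pairwise setting, the sketch below shows one common way to fit Bradley-Terry strengths from a win matrix using the classic minorization-maximization updates. It is a minimal illustration, not the authors' implementation: the function name `fit_bradley_terry`, the toy win matrix, and the use of NumPy are assumptions made here for clarity.

```python
import numpy as np

def fit_bradley_terry(wins, n_iter=1000, tol=1e-10):
    """Fit Bradley-Terry strengths with the standard MM updates.

    wins[i, j] is the number of times model i beat model j (ties excluded).
    Returns positive scores p normalized to sum to 1; the estimated
    probability that i beats j is p[i] / (p[i] + p[j]).
    """
    m = wins.shape[0]
    n = wins + wins.T                 # n[i, j]: total decisive comparisons of i vs j
    w = wins.sum(axis=1)              # w[i]: total wins of model i
    p = np.full(m, 1.0 / m)           # uniform initialization
    for _ in range(n_iter):
        denom = (n / (p[:, None] + p[None, :])).sum(axis=1)
        p_new = w / denom
        p_new /= p_new.sum()          # scores are defined only up to scale, so normalize
        if np.max(np.abs(p_new - p)) < tol:
            return p_new
        p = p_new
    return p

# Hypothetical example with three models A, B, C:
# wins[i, j] = how many times model i was preferred over model j.
wins = np.array([
    [0, 7, 9],
    [3, 0, 6],
    [1, 4, 0],
], dtype=float)
scores = fit_bradley_terry(wins)
ranking = np.argsort(-scores)  # indices of models, strongest first
print(scores, ranking)
```

The MM update converges to the maximum-likelihood scores when the comparison graph is connected; with frequent ties (discarded above), more comparisons are needed before the ranking stabilizes, which is consistent with the convergence behavior discussed in the abstract.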

@article{levtsov2025_2507.01633,
  title={Confidence and Stability of Global and Pairwise Scores in NLP Evaluation},
  author={Georgii Levtsov and Dmitry Ustalov},
  journal={arXiv preprint arXiv:2507.01633},
  year={2025}
}