Benchmark^2: Systematic Evaluation of LLM Benchmarks

Qi Qian
Chengsong Huang
Jingwen Xu
Changze Lv
Muling Wu
Wenhao Liu
Xiaohua Wang
Zhenghua Wang
Zisu Huang
Muzhao Tian
Jianhan Xu
Kun Hu
He-Da Wang
Yao Hu
Xuanjing Huang
Xiaoqing Zheng
Main: 8 pages · 3 figures · 18 tables · Bibliography: 2 pages · Appendix: 4 pages
Abstract

The rapid proliferation of benchmarks for evaluating large language models (LLMs) has created an urgent need for systematic methods to assess benchmark quality itself. We propose Benchmark^2, a comprehensive framework comprising three complementary metrics: (1) Cross-Benchmark Ranking Consistency, measuring whether a benchmark produces model rankings aligned with peer benchmarks; (2) Discriminability Score, quantifying a benchmark's ability to differentiate between models; and (3) Capability Alignment Deviation, identifying problematic instances where stronger models fail but weaker models succeed within the same model family. We conduct extensive experiments across 15 benchmarks spanning mathematics, reasoning, and knowledge domains, evaluating 11 LLMs across four model families. Our analysis reveals significant quality variations among existing benchmarks and demonstrates that selective benchmark construction based on our metrics can achieve comparable evaluation performance with substantially reduced test sets.
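The abstract describes the three metrics only at a conceptual level. As a minimal illustrative sketch (not the authors' released code), the first two metrics could be approximated from a score matrix of benchmarks × models: ranking consistency as the mean rank correlation between a benchmark's model ranking and those of its peers, and discriminability as the spread of model scores. All names and numbers below are hypothetical; Capability Alignment Deviation would additionally require per-instance correctness labels within a model family and is omitted here.

```python
# Hypothetical sketch: approximate two of the Benchmark^2 metrics from a
# toy score matrix scores[benchmark][model] (assumption: higher is better).
from itertools import combinations
from statistics import mean, pstdev

scores = {
    "math_bench":   {"model_small": 0.42, "model_mid": 0.55, "model_large": 0.71},
    "reason_bench": {"model_small": 0.38, "model_mid": 0.57, "model_large": 0.69},
    "noisy_bench":  {"model_small": 0.50, "model_mid": 0.49, "model_large": 0.52},
}

def ranking(bench):
    """Models ordered from best to worst on one benchmark."""
    return sorted(scores[bench], key=scores[bench].get, reverse=True)

def kendall_tau(r1, r2):
    """Kendall rank correlation between two orderings of the same models."""
    pos1 = {m: i for i, m in enumerate(r1)}
    pos2 = {m: i for i, m in enumerate(r2)}
    concordant = discordant = 0
    for a, b in combinations(r1, 2):
        if (pos1[a] - pos1[b]) * (pos2[a] - pos2[b]) > 0:
            concordant += 1
        else:
            discordant += 1
    return (concordant - discordant) / (concordant + discordant)

def ranking_consistency(bench):
    """Mean rank correlation of one benchmark's ranking with peer benchmarks."""
    others = [b for b in scores if b != bench]
    return mean(kendall_tau(ranking(bench), ranking(b)) for b in others)

def discriminability(bench):
    """Spread of model scores; a larger spread separates models more clearly."""
    return pstdev(scores[bench].values())

for b in scores:
    print(f"{b}: consistency={ranking_consistency(b):+.2f}, "
          f"discriminability={discriminability(b):.3f}")
```

On this toy data, "noisy_bench" would score low on both consistency and discriminability, which is the kind of benchmark the paper's framework is designed to flag.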
