Benchmark contamination has become a significant concern in the LLM evaluation community. Previous Agents-as-an-Evaluator methods address this issue by involving agents in the generation of questions. Despite their success, the biases in Agents-as-an-Evaluator methods remain largely unexplored. In this paper, we present a theoretical formulation of evaluation bias, providing valuable insights into designing unbiased evaluation protocols. Furthermore, we identify two types of bias in Agents-as-an-Evaluator through carefully designed probing tasks on a minimal Agents-as-an-Evaluator setup. To address these issues, we propose the Unbiased Evaluator, an evaluation protocol that delivers a more comprehensive, unbiased, and interpretable assessment of LLMs. Extensive experiments reveal significant room for improvement in current LLMs. Additionally, we demonstrate that the Unbiased Evaluator not only offers strong evidence of benchmark contamination but also provides interpretable evaluation results.
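To make the contamination-probing idea behind Agents-as-an-Evaluator concrete, here is a minimal illustrative sketch, not the paper's protocol: it assumes hypothetical `rewrite` (an agent that rephrases a benchmark question) and `solve` (the model under test) interfaces, and reads a large accuracy gap between original and rewritten items as a possible memorization signal.

```python
# Illustrative sketch only -- NOT the method from arXiv:2502.06655.
# Assumed (hypothetical) interfaces: `rewrite` is an agent that rephrases a
# question; `solve` is the model under evaluation.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Item:
    question: str
    answer: str


def contamination_gap(
    items: List[Item],
    rewrite: Callable[[str], str],
    solve: Callable[[str], str],
) -> float:
    """Return accuracy(original) - accuracy(rewritten).

    A large positive gap suggests the model may have memorized the
    original benchmark items rather than learned the underlying skill.
    """
    orig_correct = rewr_correct = 0
    for it in items:
        if solve(it.question).strip() == it.answer:
            orig_correct += 1
        if solve(rewrite(it.question)).strip() == it.answer:
            rewr_correct += 1
    n = max(len(items), 1)
    return orig_correct / n - rewr_correct / n


if __name__ == "__main__":
    # Toy stand-ins: the "agent" changes surface wording, and the "model"
    # only recognizes the exact original phrasing (mimicking memorization).
    data = [Item("What is 2 + 2?", "4"), Item("Capital of France?", "Paris")]
    memorized = {"What is 2 + 2?": "4", "Capital of France?": "Paris"}
    gap = contamination_gap(
        data,
        rewrite=lambda q: "Rephrased: " + q,
        solve=lambda q: memorized.get(q, "unknown"),
    )
    print(f"original-vs-rewritten accuracy gap: {gap:.2f}")
```

In this toy run the gap is 1.0, the extreme memorization case; the paper's contribution is a causal analysis of the biases such agent-driven protocols themselves introduce, which this sketch does not model.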
@article{chen2025_2502.06655,
  title   = {Unbiased Evaluation of Large Language Models from a Causal Perspective},
  author  = {Meilin Chen and Jian Tian and Liang Ma and Di Xie and Weijie Chen and Jiang Zhu},
  journal = {arXiv preprint arXiv:2502.06655},
  year    = {2025}
}