
How Should We Build A Benchmark? Revisiting 274 Code-Related Benchmarks For LLMs

Abstract

Various benchmarks have been proposed to assess the performance of large language models (LLMs) in different coding scenarios. We refer to them as code-related benchmarks. However, there are no systematic guidelines governing how such a benchmark should be developed to ensure its quality, reliability, and reproducibility. We propose HOW2BENCH, a 55-criterion checklist that serves as a set of guidelines for comprehensively governing the development of code-related benchmarks. Using HOW2BENCH, we profiled 274 benchmarks released within the past decade and found concerning issues. Nearly 70% of the benchmarks took no measures to assure data quality, and over 10% were not open-sourced, or were only partially open-sourced. Many highly cited benchmarks contain loopholes, including duplicated samples; incorrect reference code, tests, or prompts; and unremoved sensitive or confidential information. Finally, we conducted a human study involving 49 participants, which revealed significant gaps in awareness of the importance of data quality, reproducibility, and transparency.
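As a hedged illustration of the kind of data-quality check the abstract refers to (duplicated samples), the sketch below flags exact duplicates in a benchmark stored as JSON Lines. It is not part of HOW2BENCH or the paper's tooling; the file name `benchmark.jsonl` and the field names `prompt` and `reference` are hypothetical.

```python
# Minimal sketch, assuming a JSONL benchmark whose records carry
# hypothetical "prompt" and "reference" fields; not the paper's tooling.
import hashlib
import json
from collections import defaultdict


def normalize(text: str) -> str:
    """Crude normalization: ignore whitespace-only differences."""
    return "\n".join(line.strip() for line in text.splitlines() if line.strip())


def find_exact_duplicates(samples: list[dict]) -> dict[str, list[int]]:
    """Group sample indices whose normalized prompt + reference are identical."""
    groups: dict[str, list[int]] = defaultdict(list)
    for idx, sample in enumerate(samples):
        key = normalize(sample.get("prompt", "")) + "\n###\n" + normalize(sample.get("reference", ""))
        digest = hashlib.sha256(key.encode("utf-8")).hexdigest()
        groups[digest].append(idx)
    return {h: idxs for h, idxs in groups.items() if len(idxs) > 1}


if __name__ == "__main__":
    with open("benchmark.jsonl", encoding="utf-8") as f:  # hypothetical file
        samples = [json.loads(line) for line in f]
    for digest, idxs in find_exact_duplicates(samples).items():
        print(f"duplicate group {digest[:8]}: samples {idxs}")
```

A near-duplicate check (e.g., token-level similarity) would catch more cases, but even this exact-match pass illustrates a cheap quality-assurance step that most of the profiled benchmarks reportedly skipped.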

@article{cao2025_2501.10711,
  title={How Should We Build A Benchmark? Revisiting 274 Code-Related Benchmarks For LLMs},
  author={Jialun Cao and Yuk-Kit Chan and Zixuan Ling and Wenxuan Wang and Shuqing Li and Mingwei Liu and Ruixi Qiao and Yuting Han and Chaozheng Wang and Boxi Yu and Pinjia He and Shuai Wang and Zibin Zheng and Michael R. Lyu and Shing-Chi Cheung},
  journal={arXiv preprint arXiv:2501.10711},
  year={2025}
}