
Evaluation is All You Need: Strategic Overclaiming of LLM Reasoning Capabilities Through Evaluation Design

Main: 9 pages · 7 figures · 13 tables · Bibliography: 3 pages · Appendix: 9 pages
Abstract

Reasoning models represented by the Deepseek-R1-Distill series have been widely adopted by the open-source community due to their strong performance in mathematics, science, programming, and other domains. However, our study reveals that their benchmark evaluation results are subject to significant fluctuations: subtle differences in evaluation conditions can lead to substantial variations in reported scores. Similar phenomena are observed in other open-source reasoning models fine-tuned from the Deepseek-R1-Distill series, as well as in the QwQ-32B model, making their claimed performance improvements difficult to reproduce reliably. We therefore advocate for a more rigorous paradigm for model performance evaluation and present our empirical assessments of the Deepseek-R1-Distill series models.
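The fluctuation described in the abstract is easy to reproduce in miniature: on a small, sampling-based benchmark, the reported pass@1 moves noticeably with the random seed unless many samples per problem are averaged. The Python sketch below is purely illustrative and is not the paper's evaluation code; the 70% underlying solve rate, the 30-problem benchmark size, and the sample counts are assumptions chosen only to show the effect.

import random

def estimated_pass_at_1(true_solve_rate, num_problems, samples_per_problem, seed):
    # Monte Carlo model of one evaluation run: each decoded sample
    # solves a problem independently with probability true_solve_rate.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(num_problems):
        correct = sum(rng.random() < true_solve_rate for _ in range(samples_per_problem))
        total += correct / samples_per_problem
    return total / num_problems

# Small, AIME-style benchmark (30 problems), one sample per problem:
# the reported score swings by several points from run to run.
for seed in range(5):
    print(f"seed={seed}: pass@1 = {estimated_pass_at_1(0.70, 30, 1, seed):.3f}")

# Averaging 64 samples per problem stabilizes the estimate.
print(f"64 samples/problem: pass@1 = {estimated_pass_at_1(0.70, 30, 64, 0):.3f}")

Even with the model's underlying ability held fixed, single-sample runs on a 30-problem set differ by several percentage points, which is why fixed evaluation conditions and repeated sampling matter when comparing reasoning models.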

@article{sun2025_2506.04734,
  title={Evaluation is All You Need: Strategic Overclaiming of LLM Reasoning Capabilities Through Evaluation Design},
  author={Lin Sun and Weihong Lin and Jinzhu Wu and Yongfu Zhu and Xiaoqi Jian and Guangxiang Zhao and Change Jia and Linglin Zhang and Sai-er Hu and Yuhan Wu and Xiangzheng Zhang},
  journal={arXiv preprint arXiv:2506.04734},
  year={2025}
}