Toward Generalizable Evaluation in the LLM Era: A Survey Beyond Benchmarks

Large Language Models (LLMs) are advancing rapidly and have become indispensable across academia, industry, and everyday applications. To keep pace with this progress, this survey probes the core challenges that the rise of LLMs poses for evaluation. We identify and analyze two pivotal transitions: (i) from task-specific to capability-based evaluation, which reorganizes benchmarks around core competencies such as knowledge, reasoning, instruction following, multi-modal understanding, and safety; and (ii) from manual to automated evaluation, encompassing dynamic dataset curation and "LLM-as-a-judge" scoring. Yet even with these transitions, a crucial obstacle persists: the evaluation generalization issue. Bounded test sets cannot scale alongside models whose abilities grow seemingly without limit. We dissect this issue, along with the core challenges of the two transitions above, from the perspectives of methods, datasets, evaluators, and metrics. Because this field evolves rapidly, we will maintain a living GitHub repository (linked in each section) to crowd-source updates and corrections, and we warmly invite contributors and collaborators.
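As a concrete illustration of the "LLM-as-a-judge" scoring mentioned above, the sketch below shows one common pattern: a judge model is prompted to rate a candidate answer on a fixed scale, and the numeric verdict is parsed from its reply. This is a minimal sketch, not the survey's method; the call_llm helper is a hypothetical placeholder for whatever chat-completion API is available, and the prompt template and parsing rule are illustrative assumptions.

```python
# Minimal sketch of "LLM-as-a-judge" scoring.
# Assumption: call_llm(prompt) -> str wraps some chat-completion API; it is a
# hypothetical stub here, not a real library call.
import re

JUDGE_PROMPT = """You are an impartial evaluator.
Question: {question}
Candidate answer: {answer}
Rate the answer's correctness and helpfulness on a 1-5 scale.
Reply with a single line of the form "Score: <number>"."""


def call_llm(prompt: str) -> str:
    """Hypothetical placeholder; replace with a real chat-completion call."""
    raise NotImplementedError


def judge(question: str, answer: str) -> int | None:
    """Ask the judge model for a 1-5 rating and parse it from the reply."""
    reply = call_llm(JUDGE_PROMPT.format(question=question, answer=answer))
    match = re.search(r"Score:\s*([1-5])", reply)
    return int(match.group(1)) if match else None  # None = unparsable verdict
```

In practice, such judges are often prompted with rubrics or reference answers and their scores averaged over multiple samples to reduce variance; the single-call version above only illustrates the basic prompt-and-parse loop.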
@article{cao2025_2504.18838,
  title   = {Toward Generalizable Evaluation in the LLM Era: A Survey Beyond Benchmarks},
  author  = {Yixin Cao and Shibo Hong and Xinze Li and Jiahao Ying and Yubo Ma and Haiyuan Liang and Yantao Liu and Zijun Yao and Xiaozhi Wang and Dan Huang and Wenxuan Zhang and Lifu Huang and Muhao Chen and Lei Hou and Qianru Sun and Xingjun Ma and Zuxuan Wu and Min-Yen Kan and David Lo and Qi Zhang and Heng Ji and Jing Jiang and Juanzi Li and Aixin Sun and Xuanjing Huang and Tat-Seng Chua and Yu-Gang Jiang},
  journal = {arXiv preprint arXiv:2504.18838},
  year    = {2025}
}