
League of LLMs: A Benchmark-Free Paradigm for Mutual Evaluation of Large Language Models

Abstract

Although large language models (LLMs) have shown exceptional capabilities across a wide range of tasks, reliable evaluation remains a critical challenge due to data contamination, opaque operation, and subjective preferences. To address these issues, we propose League of LLMs (LOL), a novel benchmark-free evaluation paradigm that organizes multiple LLMs into a self-governed league for multi-round mutual evaluation. LOL integrates four core criteria (dynamic, transparent, objective, and professional) to mitigate key limitations of existing paradigms. Experiments on eight mainstream LLMs in mathematics and programming demonstrate that LOL can effectively distinguish LLM capabilities while maintaining high internal ranking stability (Top-k consistency = 70.7%). Beyond ranking, LOL reveals empirical findings that are difficult for traditional paradigms to capture. For instance, "memorization-based answering" behaviors are observed in some models, and a statistically significant homophily bias is found within the OpenAI family (Δ = 9, p < 0.05). Finally, we make our framework and code publicly available as a valuable complement to the current LLM evaluation ecosystem.
