As large language models (LLMs) advance, efficient knowledge evaluation becomes crucial for verifying their capabilities. Traditional benchmark-based methods face limitations such as high resource costs and information loss. We propose RECKON (Large-scale Reference-based Efficient Knowledge Evaluation for Large Language Model), which evaluates models directly against reference data. RECKON organizes unstructured data into manageable units and generates targeted questions for each cluster, improving evaluation accuracy and efficiency. Experimental results show that RECKON reduces resource consumption by 56.5% compared to traditional methods while achieving over 97% accuracy across various domains, including world knowledge, code, legal, and biomedical datasets. Code is available at this https URL
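The two-stage pipeline the abstract describes — grouping unstructured reference data into manageable units, then generating targeted questions per cluster — might be sketched as follows. This is a minimal illustration, not the paper's implementation: the keyword-based grouping and template questions stand in for whatever clustering and LLM-driven question generation RECKON actually uses.

```python
import re
from collections import defaultdict

def cluster_references(docs, keywords):
    """Group reference passages into units by the first matching keyword.
    (A stand-in for RECKON's organization of unstructured data; the real
    system's clustering criterion is not specified here.)"""
    clusters = defaultdict(list)
    for doc in docs:
        for kw in keywords:
            if re.search(rf"\b{re.escape(kw)}\b", doc, re.IGNORECASE):
                clusters[kw].append(doc)
                break
        else:
            clusters["misc"].append(doc)
    return dict(clusters)

def generate_questions(clusters):
    """Emit one targeted question per cluster. A real pipeline would prompt
    an LLM with the cluster's passages; a fixed template suffices here."""
    return {
        topic: f"Based on the reference material, what does it say about {topic}?"
        for topic in clusters
    }

# Toy reference data spanning two of the evaluated domains (legal, biomedical).
docs = [
    "The statute of limitations for contract claims is six years.",
    "Aspirin irreversibly inhibits cyclooxygenase enzymes.",
    "A contract requires offer, acceptance, and consideration.",
]
clusters = cluster_references(docs, ["contract", "aspirin"])
questions = generate_questions(clusters)
```

A model under evaluation would then answer each generated question, and its responses would be checked against the passages in the corresponding cluster — evaluating knowledge from the reference data directly rather than through a fixed benchmark.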
@article{zhang2025_2504.00756,
  title={RECKON: Large-scale Reference-based Efficient Knowledge Evaluation for Large Language Model},
  author={Lin Zhang and Zhouhong Gu and Xiaoran Shi and Hongwei Feng and Yanghua Xiao},
  journal={arXiv preprint arXiv:2504.00756},
  year={2025}
}