Characterizing Knowledge Graph Tasks in LLM Benchmarks Using Cognitive Complexity Frameworks
International Conference on Semantic Systems (i-Semantics), 2025
Abstract
Large Language Models (LLMs) are increasingly used for tasks involving Knowledge Graphs (KGs), but their evaluation typically focuses on accuracy and output correctness. We propose a complementary task characterization approach based on three complexity frameworks from cognitive psychology. Applying this approach to the LLM-KG-Bench framework, we highlight the distribution of complexity values, identify underrepresented cognitive demands, and motivate richer interpretation and greater diversity in benchmark evaluation tasks.
