CogToM: A Comprehensive Theory of Mind Benchmark inspired by Human Cognition for Large Language Models

Haibo Tong
Zeyang Yue
Feifei Zhao
Erliang Lin
Lu Jia
Ruolin Chen
Yinqian Sun
Qian Zhang
Yi Zeng
Main: 7 Pages
11 Figures
Bibliography: 4 Pages
55 Tables
Appendix: 29 Pages
Abstract

Whether Large Language Models (LLMs) truly possess human-like Theory of Mind (ToM) capabilities has garnered increasing attention. However, existing benchmarks remain largely restricted to narrow paradigms such as false-belief tasks, failing to capture the full spectrum of human cognitive mechanisms. We introduce CogToM, a comprehensive, theoretically grounded benchmark comprising over 8,000 bilingual instances across 46 paradigms, validated by 49 human annotators. A systematic evaluation of 22 representative models, including frontier models such as GPT-5.1 and Qwen3-Max, reveals significant performance heterogeneity and highlights persistent bottlenecks along specific dimensions. Further analysis based on human cognitive patterns suggests potential divergences between LLM and human cognitive structures. CogToM offers a robust instrument and perspective for investigating the evolving cognitive boundaries of LLMs.
