Vision Language Models See What You Want but not What You See

Qingying Gao
Yijiang Li
Haiyun Lyu
Haoran Sun
Dezhi Luo
Hokin Deng
Abstract

Knowing others' intentions and taking others' perspectives are two core components of human intelligence, typically considered instantiations of theory of mind. Endowing machines with these abilities is an important step toward building human-level artificial intelligence. Here we investigate intentionality understanding and perspective-taking in Vision Language Models (VLMs). For this purpose, we created the IntentBench and PerspectBench datasets, which together contain over 400 cognitive experiments grounded in real-world scenarios and classic cognitive tasks. Surprisingly, on these two datasets we find that VLMs achieve high performance on intentionality understanding but lower performance on perspective-taking. This challenges the common belief in the cognitive science literature that perspective-taking in the corresponding modality is necessary for intentionality understanding. See \href{https://growing-ai-like-a-child.github.io/pages/Three%20Mountain%20Task/}{Website}
