
Can structural correspondences ground real world representational content in Large Language Models?

Main: 32 pages
1 figure
Abstract

Large Language Models (LLMs) such as GPT-4 produce compelling responses to a wide range of prompts, but their representational capacities remain uncertain. Many LLMs have no direct contact with extra-linguistic reality: their inputs, outputs and training data consist solely of text. This raises two questions: (1) can LLMs represent anything at all, and (2) if so, what? In this paper, I explore what it would take to answer these questions according to a structural-correspondence based account of representation, and make an initial survey of the relevant evidence. I argue that the mere existence of structural correspondences between LLMs and worldly entities is insufficient to ground representation of those entities. However, if these structural correspondences play an appropriate role - that is, if they are exploited in a way that explains successful task performance - then they could ground real-world contents. This requires overcoming a challenge: the text-boundedness of LLMs appears, on the face of it, to prevent them from engaging in the right sorts of tasks.

@article{williams2025_2506.16370,
  title={Can structural correspondences ground real world representational content in Large Language Models?},
  author={Iwan Williams},
  journal={arXiv preprint arXiv:2506.16370},
  year={2025}
}