Since the extraordinary emergence of Large Language Models (LLMs), a lively debate has been taking place regarding their capability to understand the world and to capture the meaning of the dialogues in which they are involved. Arguments and counter-arguments have been proposed based upon thought experiments, anecdotal conversations between LLMs and humans, statistical linguistic analysis, philosophical considerations, and more. In this brief paper we present a counter-argument based upon a thought experiment and semi-formal considerations, leading to an inherent ambiguity barrier which prevents LLMs from having any understanding of what their amazingly fluent dialogues mean.
@article{nissani2025_2505.00654,
  title={Large Language Models Understanding: an Inherent Ambiguity Barrier},
  author={Daniel N. Nissani},
  journal={arXiv preprint arXiv:2505.00654},
  year={2025}
}