ResearchTrend.AI


Does It Make Sense to Speak of Introspection in Large Language Models?

5 June 2025
Iulia M. Comsa
Murray Shanahan
Main: 13 pages · Bibliography: 5 pages · Appendix: 4 pages
Abstract

Large language models (LLMs) exhibit compelling linguistic behaviour, and sometimes offer self-reports, that is to say statements about their own nature, inner workings, or behaviour. In humans, such reports are often attributed to a faculty of introspection and are typically linked to consciousness. This raises the question of how to interpret self-reports produced by LLMs, given their increasing linguistic fluency and cognitive capabilities. To what extent (if any) can the concept of introspection be meaningfully applied to LLMs? Here, we present and critique two examples of apparent introspective self-report from LLMs. In the first example, an LLM attempts to describe the process behind its own "creative" writing, and we argue this is not a valid example of introspection. In the second example, an LLM correctly infers the value of its own temperature parameter, and we argue that this can be legitimately considered a minimal example of introspection, albeit one that is (presumably) not accompanied by conscious experience.

@article{comsa2025_2506.05068,
  title={Does It Make Sense to Speak of Introspection in Large Language Models?},
  author={Iulia M. Comsa and Murray Shanahan},
  journal={arXiv preprint arXiv:2506.05068},
  year={2025}
}