
On Entity Identification in Language Models

3 June 2025
Masaki Sakata, Sho Yokoi, Benjamin Heinzerling, Takumi Ito, Kentaro Inui
Main: 9 pages · Appendix: 13 pages · Bibliography: 4 pages · 15 figures · 11 tables
Abstract

We analyze the extent to which internal representations of language models (LMs) identify and distinguish mentions of named entities, focusing on the many-to-many correspondence between entities and their mentions. We first formulate two problems posed by entity mentions, ambiguity and variability, and propose a framework analogous to clustering quality metrics. Specifically, through cluster analysis of LM internal representations, we quantify the extent to which mentions of the same entity cluster together and mentions of different entities remain separated. Our experiments on five Transformer-based autoregressive models show that they effectively identify and distinguish entities, with metrics analogous to precision and recall ranging from 0.66 to 0.9. Further analysis reveals that entity-related information is compactly represented in a low-dimensional linear subspace at early LM layers. Additionally, we clarify how the characteristics of entity representations influence word prediction performance. We interpret these findings through the lens of isomorphism between LM representations and entity-centric knowledge structures in the real world, offering insight into how LMs internally organize and use entity information.
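
To make the clustering-quality framework concrete, below is a minimal sketch of one way metrics analogous to precision and recall could be computed over mention representations. It uses B-cubed precision and recall, a standard pair of clustering metrics with exactly this precision/recall flavor; the B-cubed choice, the k-means clustering step, and the names `embeddings` and `entity_ids` are illustrative assumptions, not the authors' exact procedure.

# A minimal sketch (not the paper's exact method) of scoring how well
# mention representations cluster by entity, via B-cubed precision/recall.
# Assumed inputs: `embeddings` holds LM hidden states at mention positions,
# `entity_ids` holds gold entity labels; both are toy placeholders here.

import numpy as np
from sklearn.cluster import KMeans

def b_cubed(pred_clusters, gold_labels):
    """B-cubed precision and recall, averaged over all mentions."""
    pred = np.asarray(pred_clusters)
    gold = np.asarray(gold_labels)
    precisions, recalls = [], []
    for i in range(len(pred)):
        same_cluster = pred == pred[i]   # mentions grouped with mention i
        same_entity = gold == gold[i]    # mentions of the same entity as i
        overlap = np.sum(same_cluster & same_entity)
        precisions.append(overlap / np.sum(same_cluster))
        recalls.append(overlap / np.sum(same_entity))
    return float(np.mean(precisions)), float(np.mean(recalls))

# Toy example: 2-D stand-ins for mention embeddings of two entities.
rng = np.random.default_rng(0)
embeddings = np.vstack([
    rng.normal(loc=[0, 0], scale=0.1, size=(5, 2)),   # entity A mentions
    rng.normal(loc=[3, 3], scale=0.1, size=(5, 2)),   # entity B mentions
])
entity_ids = [0] * 5 + [1] * 5

# Cluster the representations; k is set to the number of gold entities.
pred = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embeddings)
p, r = b_cubed(pred, entity_ids)
print(f"B-cubed precision={p:.2f}, recall={r:.2f}")

Under this reading, high precision means mentions of different entities remain separated (resolving ambiguity), while high recall means mentions of the same entity cluster together (robustness to surface variability).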

@article{sakata2025_2506.02701,
  title={On Entity Identification in Language Models},
  author={Masaki Sakata and Benjamin Heinzerling and Sho Yokoi and Takumi Ito and Kentaro Inui},
  journal={arXiv preprint arXiv:2506.02701},
  year={2025}
}