The Representation and Recall of Interwoven Structured Knowledge in LLMs: A Geometric and Layered Analysis

15 February 2025
Ge Lei
Samuel J. Cooper
Abstract

This study investigates how large language models (LLMs) represent and recall multi-associated attributes across transformer layers. We show that intermediate layers encode factual knowledge by superimposing related attributes in overlapping spaces, and that they support effective recall even when attributes are not explicitly prompted. In contrast, later layers refine linguistic patterns and progressively separate attribute representations, optimizing task-specific outputs while appropriately narrowing attribute recall. We identify diverse encoding patterns, including, for the first time, 3D spiral structures in representations of the periodic table of elements. Our findings reveal a dynamic transition in attribute representations across layers, contributing to mechanistic interpretability and offering insights into how LLMs handle complex, interrelated knowledge.
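The kind of layered, geometric analysis the abstract describes can be approximated with a simple probing setup. The sketch below is a minimal illustration under stated assumptions, not the authors' actual pipeline: the model (gpt2), the prompt template, and the element list are all arbitrary choices, and PCA stands in for whatever dimensionality reduction the paper uses. It extracts last-token hidden states for a few element names at every layer and projects each layer's vectors to 3D, where spiral-like geometry, if present, would become visible.

```python
# Minimal probing sketch (illustrative, not the paper's method):
# per-layer hidden states for element names, projected to 3D with PCA.
import torch
from sklearn.decomposition import PCA
from transformers import AutoModel, AutoTokenizer

model_name = "gpt2"  # assumption: any decoder model exposing hidden states
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name, output_hidden_states=True)
model.eval()

elements = ["hydrogen", "helium", "lithium", "beryllium", "boron",
            "carbon", "nitrogen", "oxygen", "fluorine", "neon"]

# Collect one vector per element per layer: the last-token hidden state.
per_layer = None
with torch.no_grad():
    for name in elements:
        ids = tok(f"The element {name}", return_tensors="pt")
        states = model(**ids).hidden_states  # tuple of (n_layers + 1) tensors
        vecs = [h[0, -1].numpy() for h in states]
        if per_layer is None:
            per_layer = [[] for _ in vecs]
        for layer, v in enumerate(vecs):
            per_layer[layer].append(v)

# Project each layer's element vectors to 3D; low-dimensional views like
# these are where geometric structure (e.g. spirals) would show up.
for layer, vecs in enumerate(per_layer):
    coords = PCA(n_components=3).fit_transform(vecs)
    print(f"layer {layer}: first element at {coords[0].round(2)}")
```

In practice one would plot each layer's 3D coordinates rather than print them, and compare early, intermediate, and late layers to see the transition from superimposed to separated attribute representations.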

@article{lei2025_2502.10871,
  title={The Representation and Recall of Interwoven Structured Knowledge in LLMs: A Geometric and Layered Analysis},
  author={Ge Lei and Samuel J. Cooper},
  journal={arXiv preprint arXiv:2502.10871},
  year={2025}
}