
Intra-neuronal attention within language models: Relationships between activation and semantics

Abstract

This study investigates the ability of perceptron-type neurons in language models to perform intra-neuronal attention, that is, to identify distinct homogeneous categorical segments within the synthetic thought category they encode, based on a segmentation of the specific activation zones of the tokens to which they are particularly responsive. The objective of this work is therefore to determine to what extent formal neurons can establish a homomorphic relationship between activation-based and categorical segmentations. The results suggest that such a relationship exists, albeit tenuously, only for tokens with very high activation levels. This intra-neuronal attention subsequently enables categorical restructuring processes at the level of neurons in the following layer, thereby contributing to the progressive formation of high-level categorical abstractions.
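The core measurement the abstract describes can be illustrated with a small sketch: bucket a neuron's token activations into equal-width zones and check how categorically homogeneous each zone is. Everything here is hypothetical (the data, the zone count, and the purity metric are illustrative assumptions, not the paper's actual method or results).

```python
from collections import Counter

def zone_purity(tokens, n_zones=3):
    """Split (activation, category) pairs into equal-width activation
    zones and report each zone's dominant-category purity.
    Illustrative only; not the paper's actual methodology."""
    acts = [a for a, _ in tokens]
    lo, hi = min(acts), max(acts)
    width = (hi - lo) / n_zones or 1.0  # avoid zero width if all equal
    zones = [[] for _ in range(n_zones)]
    for a, cat in tokens:
        idx = min(int((a - lo) / width), n_zones - 1)
        zones[idx].append(cat)
    purities = []
    for cats in zones:
        if cats:
            top = Counter(cats).most_common(1)[0][1]
            purities.append(top / len(cats))
        else:
            purities.append(0.0)
    return purities

# Hypothetical tokens: the highest-activation zone is categorically
# homogeneous while lower zones are mixed, mirroring the tendency
# the abstract reports.
tokens = [
    (0.95, "fruit"), (0.90, "fruit"), (0.88, "fruit"),
    (0.55, "fruit"), (0.50, "tool"), (0.45, "animal"),
    (0.10, "tool"), (0.05, "animal"), (0.02, "fruit"),
]
print(zone_purity(tokens))  # highest zone pure, lower zones mixed
```

A homomorphic relationship, in this toy framing, would show up as per-zone purity approaching 1.0; the abstract suggests this holds only in the highest-activation zone.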

@article{pichat2025_2503.12992,
  title={Intra-neuronal attention within language models: Relationships between activation and semantics},
  author={Michael Pichat and William Pogrund and Paloma Pichat and Armanouche Gasparian and Samuel Demarchi and Corbet Alois Georgeon and Michael Veillet-Guillem},
  journal={arXiv preprint arXiv:2503.12992},
  year={2025}
}