
On Relation-Specific Neurons in Large Language Models

Abstract

In large language models (LLMs), certain neurons can store distinct pieces of knowledge learned during pretraining. While knowledge typically appears as a combination of relations and entities, it remains unclear whether some neurons focus on a relation itself -- independent of any entity. We hypothesize such neurons detect a relation in the input text and guide generation involving such a relation. To investigate this, we study the Llama-2 family on a chosen set of relations with a statistics-based method. Our experiments demonstrate the existence of relation-specific neurons. We measure the effect of selectively deactivating candidate neurons specific to relation $r$ on the LLM's ability to handle (1) facts whose relation is $r$ and (2) facts whose relation is a different relation $r' \neq r$. With respect to their capacity for encoding relation information, we give evidence for the following three properties of relation-specific neurons. (i) Neuron cumulativity. The neurons for $r$ present a cumulative effect, so that deactivating a larger portion of them results in the degradation of more facts in $r$. (ii) Neuron versatility. Neurons can be shared across multiple closely related as well as less related relations, and some relation neurons transfer across languages. (iii) Neuron interference. Deactivating neurons specific to one relation can improve LLM generation performance for facts of other relations. We will make our code publicly available at this https URL.
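The deactivation experiments described above can be illustrated with a minimal sketch. This is not the authors' code: it assumes the common approach of zeroing the activations of selected intermediate (MLP) neurons via a forward hook during the forward pass. The neuron indices and the tiny stand-in MLP below are hypothetical; in the paper's setting the hook would be attached to an MLP layer of a Llama-2 model instead.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# A tiny feed-forward block standing in for one transformer MLP layer.
hidden, inner = 8, 32
mlp = nn.Sequential(nn.Linear(hidden, inner), nn.GELU(), nn.Linear(inner, hidden))

# Hypothetical indices of neurons identified as specific to relation r.
neurons_to_deactivate = [3, 7, 11]

def deactivate_hook(module, inputs, output):
    # Zero the chosen intermediate neurons for every token in the batch,
    # simulating "deactivating" the relation-specific neurons.
    output = output.clone()
    output[..., neurons_to_deactivate] = 0.0
    return output

# Attach the hook to the activation inside the MLP, run a forward pass
# with the neurons ablated, then remove the hook to restore the model.
handle = mlp[1].register_forward_hook(deactivate_hook)
x = torch.randn(4, hidden)
out_ablated = mlp(x)
handle.remove()
out_normal = mlp(x)
```

One would then compare the model's accuracy on facts of relation $r$ (and of other relations $r' \neq r$) between the ablated and normal forward passes; deactivating a larger fraction of the candidate neurons should, per property (i), degrade more facts in $r$.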

@article{liu2025_2502.17355,
  title={On Relation-Specific Neurons in Large Language Models},
  author={Yihong Liu and Runsheng Chen and Lea Hirlimann and Ahmad Dawar Hakimi and Mingyang Wang and Amir Hossein Kargaran and Sascha Rothe and François Yvon and Hinrich Schütze},
  journal={arXiv preprint arXiv:2502.17355},
  year={2025}
}