
Brain-Inspired Exploration of Functional Networks and Key Neurons in Large Language Models

Abstract

In recent years, the rapid advancement of large language models (LLMs) in natural language processing has sparked significant interest among researchers in understanding their mechanisms and functional characteristics. Although existing studies have attempted to explain LLM functionality by identifying and interpreting specific neurons, these efforts mostly focus on the contributions of individual neurons, neglecting the fact that human brain functions are realized through intricate interaction networks. Inspired by cognitive neuroscience research on functional brain networks (FBNs), this study introduces a novel approach to investigate whether similar functional networks exist within LLMs. We use methods similar to those in functional neuroimaging analysis to locate and identify functional networks in LLMs. Experimental results show that, like the human brain, LLMs contain functional networks that frequently recur during operation. Further analysis shows that these functional networks are crucial to LLM performance: masking key functional networks significantly impairs the model's performance, while retaining just a subset of them is adequate to maintain effective operation. This research provides novel insights into the interpretation of LLMs and into making LLMs lightweight for certain downstream tasks. Code is available at this https URL.
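To make the abstract's pipeline concrete, the sketch below shows one way such an analysis could proceed, assuming an ICA-style decomposition, a standard tool for identifying functional brain networks in fMRI; the paper's exact method may differ. All file names, array shapes, component counts, and the key_neurons helper are illustrative assumptions, not the authors' implementation.

# Hedged sketch (Python): locating candidate "functional networks" in LLM
# activations with ICA, by analogy to FBN analysis in fMRI. Whether the
# paper uses ICA is an assumption; names and shapes here are hypothetical.
import numpy as np
from sklearn.decomposition import FastICA

# Hypothetical activation matrix: rows are tokens processed (analogous to
# fMRI time points), columns are neurons gathered across the model's layers.
activations = np.load("llm_activations.npy")  # shape: (n_tokens, n_neurons)

# Decompose the activations into independent components; each component's
# mixing weights over neurons define one candidate functional network.
ica = FastICA(n_components=20, random_state=0)
sources = ica.fit_transform(activations)  # (n_tokens, n_components)
networks = ica.mixing_.T                  # (n_components, n_neurons)

# Key neurons of a network: those with the largest absolute loadings.
def key_neurons(network, top_k=100):
    return np.argsort(np.abs(network))[::-1][:top_k]

# Masking experiment described in the abstract: zero out a network's key
# neurons (e.g., via forward hooks on the model) and measure the drop in
# task performance; inverting the mask tests the "retain a subset" claim.
mask = np.ones(activations.shape[1])
mask[key_neurons(networks[0])] = 0.0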

@article{liu2025_2502.20408,
  title={Brain-Inspired Exploration of Functional Networks and Key Neurons in Large Language Models},
  author={Yiheng Liu and Xiaohui Gao and Haiyang Sun and Bao Ge and Tianming Liu and Junwei Han and Xintao Hu},
  journal={arXiv preprint arXiv:2502.20408},
  year={2025}
}