
Brain-Inspired Exploration of Functional Networks and Key Neurons in Large Language Models

Xiaohui Gao
Ning Qiang
Bao Ge
Tianming Liu
Junwei Han
Xintao Hu
Main: 7 pages · Bibliography: 3 pages · Appendix: 11 pages · 18 figures · 13 tables
Abstract

In recent years, the rapid advancement of large language models (LLMs) in natural language processing has sparked significant interest among researchers in understanding their mechanisms and functional characteristics. Although prior studies have attempted to explain LLM functionality by identifying and interpreting specific neurons, these efforts mostly focus on individual neuron contributions, neglecting the fact that human brain functions are realized through intricate interaction networks. Inspired by research on functional brain networks (FBNs) in neuroscience, we apply similar methodologies established in FBN analysis to explore the "functional networks" within LLMs. Experimental results show that, much like the human brain, LLMs exhibit certain functional networks that recur frequently during operation. Further investigation reveals that these functional networks are indispensable for LLM performance: inhibiting key functional networks severely impairs the model's capabilities, while amplifying the activity of neurons within these networks can enhance either the model's overall performance or its performance on specific tasks. This suggests that these functional networks are strongly associated with either specific tasks or the overall performance of the LLM. Code is available at this https URL.
