Recent advancements in large language models (LLMs) with billions of parameters have improved performance in various applications, but their inference processes demand significant energy and computational resources. In contrast, the human brain, with approximately 86 billion neurons, is far more energy-efficient than LLMs of comparable parameter count. Inspired by this, we redesign 7 to 70 billion parameter LLMs using bio-plausible spiking mechanisms, emulating the efficient behavior of the human brain. We propose the first spiking large language model, SpikeLLM. Coupled with the proposed model, two essential approaches are introduced to improve spike training efficiency: Generalized Integrate-and-Fire (GIF) neurons, which compress the spike length from $T$ to $\frac{T}{L}\log_2 L$ bits, and an Optimal Brain Spiking framework, which divides outlier channels and allocates a different $T$ to each GIF neuron, further compressing the spike length to approximately $\log_2 T$ bits. The necessity of spike-driven LLMs is demonstrated by comparison with quantized LLMs using a similar number of operations. In the OmniQuant pipeline, SpikeLLM reduces WikiText2 perplexity by 11.01% and improves common sense reasoning accuracy by 2.55% on a LLAMA-7B W4A4 model. In the GPTQ pipeline, SpikeLLM achieves direct additive operations in linear layers, significantly exceeding PB-LLMs.
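To make the spike-length compression concrete, below is a minimal, hypothetical sketch (not the authors' implementation) contrasting a plain Integrate-and-Fire (IF) neuron, which rate-codes an activation with $T$ binary spikes, against a simplified Generalized Integrate-and-Fire (GIF) neuron that merges every $L$ steps into one multi-level spike, so the same activation costs roughly $\frac{T}{L}\log_2 L$ bits. The function names, threshold, and reset scheme are illustrative assumptions.

import numpy as np

def if_encode(x, T, v_th=1.0):
    """Rate-code x into T binary spikes (T bits per activation)."""
    membrane, spikes = 0.0, []
    for _ in range(T):
        membrane += x              # integrate the held input each step
        fire = membrane >= v_th    # fire once the threshold is crossed
        spikes.append(int(fire))
        membrane -= v_th * fire    # soft reset after firing
    return spikes

def gif_encode(x, T, L, v_th=1.0):
    """Group L IF steps into one multi-level spike in {0, ..., L},
    so only T // L spikes of ~log2(L) bits each are emitted."""
    membrane, spikes = 0.0, []
    for _ in range(T // L):
        membrane += x * L          # integrate L steps' worth of input at once
        level = int(np.clip(np.floor(membrane / v_th), 0, L))
        spikes.append(level)       # one multi-bit spike replaces L binary ones
        membrane -= v_th * level
    return spikes

x, T, L = 0.6, 16, 4
print(if_encode(x, T))             # 16 binary spikes   -> 16 bits
print(gif_encode(x, T, L))         # 4 multi-level spikes -> ~T/L * log2(L) = 8 bits

Both encodings emit roughly the same total spike count for a given activation; the GIF variant simply packs it into fewer, wider time steps, which is the source of the compression claimed above.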
@article{xing2025_2407.04752,
  title={SpikeLLM: Scaling up Spiking Neural Network to Large Language Models via Saliency-based Spiking},
  author={Xingrun Xing and Boyan Gao and Zheng Zhang and David A. Clifton and Shitao Xiao and Li Du and Guoqi Li and Jiajun Zhang},
  journal={arXiv preprint arXiv:2407.04752},
  year={2025}
}