SpikeLLM: Scaling up Spiking Neural Network to Large Language Models via Saliency-based Spiking

5 July 2024
Xingrun Xing
Boyan Gao
Zheng Zhang
David A. Clifton
Shitao Xiao
Li Du
Guoqi Li
Jiajun Zhang
Abstract

Recent advancements in large language models (LLMs) with billions of parameters have improved performance in various applications, but their inference processes demand significant energy and computational resources. In contrast, the human brain, with approximately 86 billion neurons, is far more energy-efficient than LLMs of a similar parameter count. Inspired by this, we redesign LLMs with 7 to 70 billion parameters using bio-plausible spiking mechanisms, emulating the efficient behavior of the human brain. We propose the first spiking large language model, SpikeLLM. Coupled with the proposed model, two essential approaches are introduced to improve spike training efficiency: Generalized Integrate-and-Fire (GIF) neurons, which compress the spike length from $T$ to $\frac{T}{L}\log_2 L$ bits, and an Optimal Brain Spiking framework, which partitions outlier channels and allocates different $T$ to GIF neurons, further compressing the spike length to approximately $\log_2 T$ bits. The necessity of spike-driven LLMs is demonstrated by comparison with quantized LLMs using a similar number of operations. In the OmniQuant pipeline, SpikeLLM reduces WikiText2 perplexity by 11.01% and improves common-sense reasoning accuracy by 2.55% on a LLaMA-7B W4A4 model. In the GPTQ pipeline, SpikeLLM achieves directly additive operations in linear layers, significantly exceeding PB-LLMs.
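The GIF compression claim can be illustrated with a toy sketch. The code below is not the authors' implementation; the encoder, its soft-reset rule, and the names (gif_spike_encode, v_max, threshold) are assumptions made for illustration only. It shows the bit-count arithmetic from the abstract: a binary integrate-and-fire neuron firing over $T$ timesteps emits $T$ bits of spikes, while a GIF neuron emitting one of $L$ levels at each of $T/L$ coarse steps emits $\frac{T}{L}\log_2 L$ bits.

```python
import numpy as np

def gif_spike_encode(x, T=16, L=4, v_max=1.0):
    """Hypothetical sketch of a Generalized Integrate-and-Fire (GIF) encoder.

    A binary IF neuron would emit up to T single-bit spikes; here each of the
    T // L coarse steps emits one multi-level spike in {0, ..., L-1}, so the
    total spike length is (T / L) * log2(L) bits instead of T bits.
    """
    x = np.asarray(x, dtype=np.float64)
    steps = T // L
    v = np.zeros_like(x)                       # membrane potential
    spikes = np.zeros((steps,) + x.shape, dtype=np.int64)
    step_input = np.clip(x, 0.0, v_max)        # clamp input to the firing range
    threshold = v_max / L                      # one threshold unit per spike level
    for t in range(steps):
        v += step_input
        level = np.clip(np.floor(v / threshold), 0, L - 1).astype(np.int64)
        spikes[t] = level                      # multi-level spike, log2(L) bits
        v -= level * threshold                 # soft reset by the emitted charge
    return spikes

def spike_length_bits(T, L):
    # binary IF spike length vs. GIF spike length, in bits
    return T, (T / L) * np.log2(L)

print(gif_spike_encode(np.array([0.1, 0.5, 0.9])))
print(spike_length_bits(16, 4))   # -> (16, 8.0): half the spike length for L = 4
```

Under the same arithmetic, allocating a larger $T$ only to the salient (outlier) channels, as the abstract's Optimal Brain Spiking framework describes, pushes the average spike length toward roughly $\log_2 T$ bits.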

@article{xing2025_2407.04752,
  title={SpikeLLM: Scaling up Spiking Neural Network to Large Language Models via Saliency-based Spiking},
  author={Xingrun Xing and Boyan Gao and Zheng Zhang and David A. Clifton and Shitao Xiao and Li Du and Guoqi Li and Jiajun Zhang},
  journal={arXiv preprint arXiv:2407.04752},
  year={2025}
}