Spiking Transformer: Introducing Accurate Addition-Only Spiking Self-Attention for Transformer

Transformers have demonstrated outstanding performance across a wide range of tasks, owing to their self-attention mechanism, but they are highly energy-consuming. Spiking Neural Networks (SNNs) have emerged as a promising energy-efficient alternative to traditional Artificial Neural Networks, leveraging event-driven computation and binary spikes for information transfer. Combining the capabilities of Transformers with the energy efficiency of SNNs therefore offers a compelling opportunity. This paper addresses the challenge of adapting the Transformer's self-attention mechanism to the spiking paradigm by introducing a novel approach: Accurate Addition-Only Spiking Self-Attention (AOSA). Unlike existing methods that rely solely on binary spiking neurons for all components of self-attention, our approach integrates binary, ReLU, and ternary spiking neurons. This hybrid strategy significantly improves accuracy while keeping the computation free of multiplications, and it also eliminates the need for softmax and scaling operations. Extensive experiments show that the AOSA-based Spiking Transformer outperforms existing SNN-based Transformers on several datasets, even achieving an accuracy of 78.66% on ImageNet-1K. Our work represents a significant advancement in SNN-based Transformer models, offering a more accurate and efficient solution for real-world applications.
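
To make the idea concrete, the sketch below shows what a softmax-free, scaling-free spiking self-attention block of this flavour could look like in PyTorch. It is only an illustration of the general technique described above, not the authors' implementation: the assignment of binary, ternary, and ReLU neurons to the query, key, and value paths, the thresholds, and all names (BinarySpike, TernarySpike, AOSASelfAttention) are assumptions made for this example.

# Illustrative sketch only, not the paper's code. Neuron-to-path assignment,
# thresholds, and class names are assumptions; surrogate gradients are omitted.
import torch
import torch.nn as nn


class BinarySpike(nn.Module):
    """Heaviside firing: emits {0, 1} spikes."""
    def forward(self, x):
        return (x > 0).float()


class TernarySpike(nn.Module):
    """Ternary firing: emits {-1, 0, 1} around a symmetric threshold (assumed form)."""
    def __init__(self, threshold=0.5):
        super().__init__()
        self.threshold = threshold

    def forward(self, x):
        return (x > self.threshold).float() - (x < -self.threshold).float()


class AOSASelfAttention(nn.Module):
    """Self-attention with spike-coded Q and K, no softmax and no 1/sqrt(d) scaling."""
    def __init__(self, dim):
        super().__init__()
        self.q_proj = nn.Linear(dim, dim, bias=False)
        self.k_proj = nn.Linear(dim, dim, bias=False)
        self.v_proj = nn.Linear(dim, dim, bias=False)
        self.q_neuron = BinarySpike()       # assumed: binary spikes for queries
        self.k_neuron = TernarySpike()      # assumed: ternary spikes for keys
        self.v_neuron = nn.ReLU()           # assumed: ReLU "neuron" for values
        self.out_proj = nn.Linear(dim, dim, bias=False)

    def forward(self, x):                   # x: (batch, tokens, dim)
        q = self.q_neuron(self.q_proj(x))   # entries in {0, 1}
        k = self.k_neuron(self.k_proj(x))   # entries in {-1, 0, 1}
        v = self.v_neuron(self.v_proj(x))   # non-negative real values
        # With spike-valued q and k, both products below reduce to signed
        # accumulations rather than true multiplications on suitable hardware.
        attn = q @ k.transpose(-2, -1)      # no softmax, no scaling
        out = attn @ v
        return self.out_proj(out)


if __name__ == "__main__":
    block = AOSASelfAttention(dim=64)
    tokens = torch.randn(2, 16, 64)
    print(block(tokens).shape)              # torch.Size([2, 16, 64])

On neuromorphic or accumulate-only hardware, spike-valued Q and K would turn both attention products into signed additions, which is where the claimed multiplication-free energy saving would come from.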
@article{guo2025_2503.00226,
  title   = {Spiking Transformer: Introducing Accurate Addition-Only Spiking Self-Attention for Transformer},
  author  = {Yufei Guo and Xiaode Liu and Yuanpei Chen and Weihang Peng and Yuhan Zhang and Zhe Ma},
  journal = {arXiv preprint arXiv:2503.00226},
  year    = {2025}
}