The recent success of Vision Transformers has generated significant interest in attention mechanisms and transformer architectures. Although existing methods have proposed spiking self-attention mechanisms compatible with spiking neural networks, they often face challenges in effective deployment on current neuromorphic platforms. This paper introduces ViT-LCA, a novel model that combines vision transformers with the Locally Competitive Algorithm (LCA) to facilitate efficient neuromorphic deployment. Our experiments show that ViT-LCA achieves higher accuracy on the ImageNet-1K dataset while consuming significantly less energy than other spiking vision transformer counterparts. Furthermore, ViT-LCA's neuromorphic-friendly design allows for a more direct mapping onto current neuromorphic architectures.
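As a rough illustration of the LCA component only (not the paper's specific ViT-LCA formulation, which the abstract does not detail), the sketch below shows standard LCA sparse coding with a fixed dictionary and soft-threshold activation; the function name lca_sparse_code and parameters Phi, lam, tau are illustrative assumptions.

import numpy as np

def lca_sparse_code(x, Phi, lam=0.1, tau=10.0, dt=1.0, steps=200):
    """Sparse code of input x over dictionary Phi (columns = unit-norm atoms)."""
    b = Phi.T @ x                                 # feed-forward drive
    G = Phi.T @ Phi - np.eye(Phi.shape[1])        # lateral inhibition (self-term removed)
    u = np.zeros(Phi.shape[1])                    # neuron membrane potentials
    for _ in range(steps):
        a = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)  # soft-threshold activations
        u += dt * (b - u - G @ a) / tau           # leaky integration with competition
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

# Example usage with a random normalized dictionary and input patch
rng = np.random.default_rng(0)
Phi = rng.normal(size=(64, 128))
Phi /= np.linalg.norm(Phi, axis=0, keepdims=True)
a = lca_sparse_code(rng.normal(size=64), Phi)
print("nonzero coefficients:", np.count_nonzero(a), "/", a.size)

Because each unit's update depends only on its own potential, its feed-forward drive, and inhibition from active neighbors, dynamics of this form map naturally onto neuromorphic hardware, which is the property the abstract emphasizes.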
@article{takaghaj2025_2411.00140,
  title   = {ViT-LCA: A Neuromorphic Approach for Vision Transformers},
  author  = {Sanaz Mahmoodi Takaghaj},
  journal = {arXiv preprint arXiv:2411.00140},
  year    = {2025}
}