
ViT-LCA: A Neuromorphic Approach for Vision Transformers

International Conference on Artificial Intelligence Circuits and Systems (ICAICS), 2024
Sanaz Mahmoodi Takaghaj
Main: 4 pages · Bibliography: 1 page · 3 figures · 2 tables
Abstract

The recent success of Vision Transformers has generated significant interest in attention mechanisms and transformer architectures. Although existing methods have proposed spiking self-attention mechanisms compatible with spiking neural networks, they often face challenges in effective deployment on current neuromorphic platforms. This paper introduces ViT-LCA, a novel model that combines vision transformers with the Locally Competitive Algorithm (LCA) to facilitate efficient neuromorphic deployment. Our experiments show that ViT-LCA achieves higher accuracy on the ImageNet-1K dataset while consuming significantly less energy than other spiking vision transformer counterparts. Furthermore, ViT-LCA's neuromorphic-friendly design allows for more direct mapping onto current neuromorphic architectures.
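As background for readers unfamiliar with the LCA component: the Locally Competitive Algorithm (Rozell et al.) solves a sparse-coding problem with neuron-like leaky-integrator dynamics and lateral inhibition, which is what makes it amenable to neuromorphic hardware. The sketch below is a minimal generic LCA implementation for illustration only; it is not the paper's code, and all names and parameter values here are assumptions.

```python
import numpy as np

def lca_sparse_code(x, Phi, lam=0.1, tau=10.0, dt=1.0, steps=200):
    """Sparse-code input x over dictionary Phi via LCA dynamics.

    x:   (d,) input vector.
    Phi: (d, n) dictionary with unit-norm columns.
    Returns sparse coefficients a of shape (n,).
    (Illustrative sketch; hyperparameters are arbitrary choices.)
    """
    b = Phi.T @ x                            # feedforward drive for each neuron
    G = Phi.T @ Phi - np.eye(Phi.shape[1])   # lateral inhibition between neurons
    u = np.zeros_like(b)                     # membrane potentials

    def soft_threshold(u):
        # Activation: soft threshold at lam (yields L1-sparse codes).
        return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

    for _ in range(steps):
        a = soft_threshold(u)
        # Leaky integration: drive minus leak minus competition from active units.
        u += (dt / tau) * (b - u - G @ a)
    return soft_threshold(u)
```

Only neurons whose potential exceeds the threshold fire and inhibit their neighbors, so the steady state is a sparse code; that local, event-driven competition is the property the paper exploits for direct mapping onto neuromorphic architectures.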
