ResearchTrend.AI


ViT-LCA: A Neuromorphic Approach for Vision Transformers

31 October 2024
Sanaz Mahmoodi Takaghaj
Abstract

The recent success of Vision Transformers has generated significant interest in attention mechanisms and transformer architectures. Although existing methods have proposed spiking self-attention mechanisms compatible with spiking neural networks, they often face challenges in effective deployment on current neuromorphic platforms. This paper introduces a novel model, ViT-LCA, that combines vision transformers with the Locally Competitive Algorithm (LCA) to facilitate efficient neuromorphic deployment. Our experiments show that ViT-LCA achieves higher accuracy on the ImageNet-1K dataset while consuming significantly less energy than other spiking vision transformer counterparts. Furthermore, ViT-LCA's neuromorphic-friendly design allows for more direct mapping onto current neuromorphic architectures.
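For context, the Locally Competitive Algorithm mentioned in the abstract computes sparse codes through neuron-like dynamics with lateral inhibition between dictionary elements (Rozell et al., 2008), which is what makes it a natural fit for neuromorphic hardware. The sketch below is a minimal, generic LCA for sparse coding over a fixed dictionary, for illustration only; the dictionary, threshold value, and how the paper couples LCA with the ViT pipeline are assumptions, not details from the paper.

```python
import numpy as np

def lca_sparse_code(x, D, lam=0.1, tau=10.0, n_steps=200):
    """Generic Locally Competitive Algorithm (illustrative sketch).

    x : input vector (d,)
    D : dictionary with unit-norm columns (d, k)
    Returns a sparse coefficient vector a with x ~= D @ a.
    """
    b = D.T @ x                        # feed-forward drive per neuron
    G = D.T @ D - np.eye(D.shape[1])   # lateral inhibition (Gram minus self)
    u = np.zeros(D.shape[1])           # membrane potentials

    def soft_threshold(u):
        # Active coefficients: shrink toward zero by lam (sparsity prior)
        return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

    for _ in range(n_steps):
        a = soft_threshold(u)
        # Leaky integration: drive minus leak minus competition from
        # other active neurons
        u += (b - u - G @ a) / tau
    return soft_threshold(u)
```

With a soft threshold, these dynamics converge to a LASSO-style sparse solution, so an input built from a single dictionary atom should activate mainly that atom's coefficient.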

@article{takaghaj2025_2411.00140,
  title={ViT-LCA: A Neuromorphic Approach for Vision Transformers},
  author={Sanaz Mahmoodi Takaghaj},
  journal={arXiv preprint arXiv:2411.00140},
  year={2025}
}