Input-Specific and Universal Adversarial Attack Generation for Spiking Neural Networks in the Spiking Domain

Abstract

As Spiking Neural Networks (SNNs) gain traction across various applications, understanding their security vulnerabilities becomes increasingly important. In this work, we focus on adversarial attacks, which are perhaps the most concerning threat. An adversarial attack aims to find a subtle input perturbation that fools the network's decision-making. We propose two novel adversarial attack algorithms for SNNs: an input-specific attack that crafts adversarial samples from individual dataset inputs, and a universal attack that generates a reusable patch capable of inducing misclassification across most inputs, making it practical for real-time deployment. Both algorithms are gradient-based, operate directly in the spiking domain, and prove effective across several evaluation metrics, including adversarial accuracy, stealthiness, and generation time. Experimental results on two widely used neuromorphic vision datasets, NMNIST and IBM DVS Gesture, show that our proposed attacks surpass all existing state-of-the-art methods on all metrics. Additionally, we present the first demonstration of adversarial attack generation in the sound domain, using the SHD dataset.
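The abstract does not reproduce the paper's exact algorithms, so the following is only a minimal sketch of the general idea behind a gradient-based, input-specific attack in the spiking domain. It assumes a PyTorch SNN classifier (here a hypothetical callable, model) that maps a binary spike tensor to class logits; the function name, tensor shapes, and budget parameter are all illustrative. A single gradient step scores which spike flips would most increase the classification loss, then flips the top-scoring positions so the perturbed input remains a valid binary spike train.

import torch
import torch.nn.functional as F

def spike_flip_attack(model, spikes, label, budget=100):
    """Illustrative input-specific attack on a binary spike train.

    Assumptions (not from the paper): `spikes` is a {0, 1} tensor,
    e.g. [T, C, H, W], and `model(spikes)` returns class logits of
    shape [num_classes]. `budget` bounds the number of flipped spikes,
    which serves as a simple stealthiness constraint.
    """
    spikes = spikes.clone().detach().requires_grad_(True)
    logits = model(spikes)
    loss = F.cross_entropy(logits.unsqueeze(0), torch.tensor([label]))
    loss.backward()
    grad = spikes.grad.detach()

    with torch.no_grad():
        # Flipping 0 -> 1 raises the loss when grad > 0; flipping
        # 1 -> 0 raises it when grad < 0. Score every position by
        # the estimated loss increase of flipping it.
        flip_gain = torch.where(spikes > 0, -grad, grad)
        idx = torch.topk(flip_gain.flatten(), budget).indices

        adv = spikes.detach().clone().flatten()
        adv[idx] = 1.0 - adv[idx]  # flip the selected spike positions
    return adv.view_as(spikes)

A universal-patch variant would, under the same assumptions, accumulate such gradient scores over many training inputs and flip one fixed, input-independent set of positions that is then reused unchanged at inference time. In practice one would also iterate this greedy step and restrict flips to positions with positive gain.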

@article{raptis2025_2505.06299,
  title={Input-Specific and Universal Adversarial Attack Generation for Spiking Neural Networks in the Spiking Domain},
  author={Spyridon Raptis and Haralampos-G. Stratigopoulos},
  journal={arXiv preprint arXiv:2505.06299},
  year={2025}
}