On the Privacy Risks of Spiking Neural Networks: A Membership Inference Analysis

18 February 2025
Junyi Guan
Abhijith Sharma
Chong Tian
Salem Lahlou
Abstract

Spiking Neural Networks (SNNs) are increasingly explored for their energy efficiency and robustness in real-world applications, yet their privacy risks remain largely unexamined. In this work, we investigate the susceptibility of SNNs to Membership Inference Attacks (MIAs), a major privacy threat in which an adversary attempts to determine whether a given sample was part of the training dataset. While prior work suggests that SNNs may offer inherent robustness due to their discrete, event-driven nature, we find that this resilience diminishes as latency (T) increases. Furthermore, we introduce an input dropout strategy in the black-box setting that significantly enhances membership inference in SNNs. Our findings challenge the assumption that SNNs are inherently more secure: although they are often expected to be more robust, our results reveal that SNNs exhibit privacy vulnerabilities comparable to those of Artificial Neural Networks (ANNs). Our code is available at this https URL.
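The input-dropout idea described above can be sketched as follows. This is a minimal, hypothetical illustration, not the authors' implementation: it assumes the attacker has black-box query access to a trained SNN (the query_snn function below is a runnable stub standing in for that access), that inputs are binary spike trains of shape (T, D), and that a decision threshold tau has been calibrated separately (e.g., on shadow data). The intuition is that training-set members tend to retain high confidence even when a fraction of input spikes is randomly dropped, while non-members degrade faster.

# Hedged sketch of a dropout-based membership inference attack on an SNN.
# All names (query_snn, dropout_mia_score, tau, p_drop) are illustrative
# assumptions, not from the paper.
import numpy as np

rng = np.random.default_rng(0)

def query_snn(spike_train: np.ndarray) -> np.ndarray:
    """Hypothetical black-box SNN query returning class probabilities.

    Stubbed with a fixed linear readout over per-neuron spike counts so
    the sketch is self-contained and runnable.
    """
    T, D = spike_train.shape
    W = np.linspace(-1.0, 1.0, D * 10).reshape(D, 10)  # fixed fake weights
    logits = spike_train.sum(axis=0) @ W / max(T, 1)
    e = np.exp(logits - logits.max())
    return e / e.sum()

def dropout_mia_score(spike_train: np.ndarray, p_drop: float = 0.3,
                      n_queries: int = 20) -> float:
    """Average top-class confidence over randomly dropped input variants."""
    scores = []
    for _ in range(n_queries):
        # Drop each input spike independently with probability p_drop.
        mask = rng.random(spike_train.shape) >= p_drop
        probs = query_snn(spike_train * mask)
        scores.append(probs.max())
    return float(np.mean(scores))

def infer_membership(spike_train: np.ndarray, tau: float = 0.5) -> bool:
    """Predict 'member' when dropout-averaged confidence exceeds tau."""
    return dropout_mia_score(spike_train) > tau

if __name__ == "__main__":
    T, D = 25, 64  # latency (time steps) and input dimension
    x = (rng.random((T, D)) < 0.1).astype(float)  # random sparse spike train
    print("score:", dropout_mia_score(x), "member?", infer_membership(x))

In this sketch, the attacker needs only repeated black-box queries on perturbed copies of a single input; the score aggregates confidence under perturbation, consistent with the abstract's observation that the attack operates in a black-box setting.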

@article{guan2025_2502.13191,
  title={On the Privacy Risks of Spiking Neural Networks: A Membership Inference Analysis},
  author={Junyi Guan and Abhijith Sharma and Chong Tian and Salem Lahlou},
  journal={arXiv preprint arXiv:2502.13191},
  year={2025}
}