
On the Privacy Risks of Spiking Neural Networks: A Membership Inference Analysis

Conference on Uncertainty in Artificial Intelligence (UAI), 2025
Main: 8 pages · Bibliography: 4 pages · Appendix: 2 pages · 4 figures · 7 tables
Abstract

Spiking Neural Networks (SNNs) are increasingly explored for their energy efficiency and robustness in real-world applications, yet their privacy risks remain largely unexamined. In this work, we investigate the susceptibility of SNNs to Membership Inference Attacks (MIAs) -- a major privacy threat where an adversary attempts to determine whether a given sample was part of the training dataset. While prior work suggests that SNNs may offer inherent robustness due to their discrete, event-driven nature, we find that this resilience diminishes as latency (T) increases. Furthermore, we introduce an input dropout strategy in the black-box setting that significantly enhances membership inference against SNNs. Our findings challenge the assumption that SNNs are inherently more secure: contrary to expectations, SNNs exhibit privacy vulnerabilities comparable to those of Artificial Neural Networks (ANNs). Our code is available at this https URL.
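To make the black-box attack setting concrete, here is a minimal, hypothetical sketch of a dropout-based membership inference test. All names, the dropout rate, and the confidence threshold are illustrative assumptions, not the paper's actual method or code; the only assumption about the target is that it exposes a query function returning class probabilities.

```python
# Hypothetical sketch of a dropout-based membership inference attack.
# `query_fn`, `p`, `trials`, and `threshold` are illustrative assumptions,
# not taken from the paper.
import random

def input_dropout(x, p, rng):
    # Zero out each input element independently with probability p.
    return [0.0 if rng.random() < p else v for v in x]

def membership_score(query_fn, x, p=0.2, trials=16, rng=None):
    # Average the model's top-class confidence over several dropped-out
    # copies of x; the intuition is that training members tend to remain
    # confidently classified even under input perturbation.
    rng = rng or random.Random(0)
    total = 0.0
    for _ in range(trials):
        probs = query_fn(input_dropout(x, p, rng))
        total += max(probs)
    return total / trials

def infer_member(query_fn, x, threshold=0.9, **kw):
    # Black-box decision: only the outputs of query_fn are used,
    # never the model's weights or internal spike activity.
    return membership_score(query_fn, x, **kw) >= threshold
```

For example, against a toy model `query_fn = lambda x: [0.95, 0.05]` that is confident regardless of dropout, `infer_member(query_fn, [1.0] * 4)` returns `True`, while a maximally uncertain model scores below the threshold.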
