ParaRevSNN: A Parallel Reversible Spiking Neural Network for Efficient Training and Inference

Reversible Spiking Neural Networks (RevSNNs) enable memory-efficient training by reconstructing forward activations during backpropagation, but suffer from high latency due to strictly sequential computation. To overcome this limitation, we propose ParaRevSNN, a parallel reversible SNN architecture that decouples sequential dependencies between reversible blocks while preserving reversibility. This design enables inter-block parallelism, significantly accelerating training and inference while retaining the memory-saving benefits of reversibility. Experiments on CIFAR10, CIFAR100, CIFAR10-DVS, and DVS128 Gesture demonstrate that ParaRevSNN matches or exceeds the accuracy of standard RevSNNs, while reducing training time by up to 35.2% and inference time by up to 18.15%, making it well-suited for deployment in resource-constrained scenarios.
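The core memory-saving mechanism the abstract refers to is reversible coupling: each block's inputs can be recomputed exactly from its outputs, so forward activations need not be cached for backpropagation. The abstract does not give ParaRevSNN's exact block equations, so the following is a minimal sketch of a generic RevNet-style additive coupling in PyTorch; the `ReversibleBlock` class, its `F`/`G` sub-modules, and the `inverse` method are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class ReversibleBlock(nn.Module):
    """RevNet-style additive coupling (a sketch, not ParaRevSNN itself).

    The input is split into two streams (x1, x2). Because the mapping is
    invertible, activations can be reconstructed during the backward pass
    instead of being stored, which is the memory saving RevSNNs exploit.
    F and G stand in for arbitrary sub-networks, e.g. spiking conv layers.
    """

    def __init__(self, F: nn.Module, G: nn.Module):
        super().__init__()
        self.F = F
        self.G = G

    def forward(self, x1: torch.Tensor, x2: torch.Tensor):
        # Additive coupling: each output mixes in a function of the other stream.
        y1 = x1 + self.F(x2)
        y2 = x2 + self.G(y1)
        return y1, y2

    def inverse(self, y1: torch.Tensor, y2: torch.Tensor):
        # Exact reconstruction of the inputs from the outputs,
        # run during backprop so no forward activations are cached.
        x2 = y2 - self.G(y1)
        x1 = y1 - self.F(x2)
        return x1, x2

# Quick round-trip check of reversibility (hypothetical sub-modules).
block = ReversibleBlock(nn.Linear(16, 16), nn.Linear(16, 16))
a, b = torch.randn(4, 16), torch.randn(4, 16)
with torch.no_grad():
    y1, y2 = block(a, b)
    r1, r2 = block.inverse(y1, y2)
assert torch.allclose(a, r1, atol=1e-6) and torch.allclose(b, r2, atol=1e-6)
```

In a standard RevSNN, the `inverse` of block k must finish before block k-1 can be processed, which is the strictly sequential dependency the paper targets; ParaRevSNN's contribution is restructuring these dependencies so multiple blocks can be computed in parallel while each remains invertible.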