Adaptively Pruned Spiking Neural Networks for Energy-Efficient Intracortical Neural Decoding

Abstract

Intracortical brain-machine interfaces demand low-latency, energy-efficient solutions for neural decoding. Spiking Neural Networks (SNNs) deployed on neuromorphic hardware have demonstrated remarkable efficiency in neural decoding by leveraging sparse binary activations and efficient spatiotemporal processing. However, reducing the computational cost of SNNs remains a critical challenge for developing ultra-efficient intracortical neural implants. In this work, we introduce a novel adaptive pruning algorithm specifically designed for SNNs with high activation sparsity, targeting intracortical neural decoding. Our method dynamically adjusts pruning decisions and employs a rollback mechanism to selectively eliminate redundant synaptic connections without compromising decoding accuracy. Experimental evaluation on the NeuroBench Non-Human Primate (NHP) Motor Prediction benchmark shows that our pruned network achieves performance comparable to dense networks, with up to a tenfold improvement in efficiency. Moreover, hardware simulation on a neuromorphic processor reveals that the pruned network operates at sub-μW power levels, demonstrating its suitability for energy-constrained neural implants. These results underscore the promise of our approach for advancing energy-efficient intracortical brain-machine interfaces with low-overhead on-device intelligence.
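The abstract describes pruning that adapts its decisions and rolls back when accuracy degrades. As an illustration only (not the paper's algorithm), the general idea of iterative magnitude pruning with an accuracy-gated rollback can be sketched as follows; the function names, step schedule, and tolerance are assumptions for the sketch:

```python
import numpy as np

def adaptive_prune_with_rollback(weights, evaluate, step=0.05, tol=0.01,
                                 max_sparsity=0.9):
    """Illustrative sketch: iteratively prune the smallest-magnitude synapses,
    rolling back the last step if decoding accuracy drops by more than `tol`.

    weights  : list of numpy weight arrays (pruned in place)
    evaluate : callable returning decoding accuracy for the current weights
    Returns the final sparsity level reached.
    """
    baseline = evaluate(weights)
    n_steps = int(round(max_sparsity / step))
    sparsity = 0.0
    for k in range(1, n_steps + 1):
        # Snapshot weights so the step can be undone if accuracy degrades.
        snapshot = [w.copy() for w in weights]
        sparsity = k * step
        # Zero out the smallest-magnitude fraction of each layer's synapses.
        for w in weights:
            thresh = np.quantile(np.abs(w), sparsity)
            w[np.abs(w) < thresh] = 0.0
        if baseline - evaluate(weights) > tol:
            # Rollback: restore previous weights and stop pruning further.
            for w, s in zip(weights, snapshot):
                w[...] = s
            sparsity = (k - 1) * step
            break
    return sparsity
```

A real SNN variant would evaluate on held-out neural recordings and could also account for activation sparsity when ranking synapses; this sketch only shows the adjust-and-rollback control loop.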

@article{rivelli2025_2504.11568,
  title={Adaptively Pruned Spiking Neural Networks for Energy-Efficient Intracortical Neural Decoding},
  author={Francesca Rivelli and Martin Popov and Charalampos S. Kouzinopoulos and Guangzhi Tang},
  journal={arXiv preprint arXiv:2504.11568},
  year={2025}
}