Towards Accurate Binary Spiking Neural Networks: Learning with Adaptive Gradient Modulation Mechanism

21 February 2025
Yu Liang
Wenjie Wei
Ammar Belatreche
Honglin Cao
Zijian Zhou
Shuai Wang
Malu Zhang
Yang Yang
Abstract

Binary Spiking Neural Networks (BSNNs) inherit the event-driven paradigm of SNNs while also benefiting from the reduced storage burden of binarization techniques. These advantages make BSNNs lightweight and energy-efficient, rendering them ideal for deployment on resource-constrained edge devices. However, due to the binary synaptic weights and the non-differentiable spike function, effectively training BSNNs remains an open question. In this paper, we conduct an in-depth analysis of a key challenge in BSNN learning, namely the frequent weight sign-flipping problem. To mitigate this issue, we propose an Adaptive Gradient Modulation Mechanism (AGMM), which reduces the frequency of weight sign flipping by adaptively adjusting gradients during learning. AGMM enables BSNNs to converge faster and reach higher accuracy, effectively narrowing the gap between BSNNs and their full-precision equivalents. We validate AGMM on both static and neuromorphic datasets, and the results indicate that it achieves state-of-the-art accuracy among BSNNs. This work substantially reduces storage demands and enhances the inherent energy efficiency of SNNs, making them highly feasible for resource-constrained environments.
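
The abstract does not specify how the gradient modulation is computed; the exact AGMM formulation is given in the full paper. As a rough, illustrative sketch of the general idea only, the PyTorch snippet below binarizes weights with a straight-through estimator and damps the gradients of latent weights that sit near zero, where a single update is most likely to flip the sign. The class name, the flip_damping parameter, and the damping rule are assumptions made here for illustration, not the authors' method.

import torch
import torch.nn as nn
import torch.nn.functional as F

class BinaryLinearSTE(nn.Module):
    """Linear layer with binarized weights and a straight-through estimator.

    modulate_grad() applies a *hypothetical* gradient-damping rule intended
    to reduce weight sign flips; it is not the AGMM defined in the paper.
    """

    def __init__(self, in_features, out_features, flip_damping=0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(out_features, in_features))
        nn.init.kaiming_uniform_(self.weight, a=5 ** 0.5)
        self.flip_damping = flip_damping  # assumed hyperparameter in [0, 1]

    def forward(self, x):
        w = self.weight
        # Forward pass uses binary weights; the backward pass flows through
        # the latent full-precision weights (straight-through estimator).
        w_bin = torch.where(w >= 0, torch.ones_like(w), -torch.ones_like(w))
        w_ste = w_bin.detach() + w - w.detach()
        return F.linear(x, w_ste)

    def modulate_grad(self):
        """Damp gradients of latent weights near zero (sign-flip prone)."""
        if self.weight.grad is None:
            return
        closeness = 1.0 - self.weight.detach().abs().clamp(max=1.0)
        scale = 1.0 - self.flip_damping * closeness
        self.weight.grad.mul_(scale)

# Usage sketch: modulate gradients between backward() and optimizer.step().
layer = BinaryLinearSTE(128, 64)
x = torch.randn(32, 128)
loss = layer(x).pow(2).mean()
loss.backward()
layer.modulate_grad()
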

View on arXiv
@article{liang2025_2502.14344,
  title={Towards Accurate Binary Spiking Neural Networks: Learning with Adaptive Gradient Modulation Mechanism},
  author={Yu Liang and Wenjie Wei and Ammar Belatreche and Honglin Cao and Zijian Zhou and Shuai Wang and Malu Zhang and Yang Yang},
  journal={arXiv preprint arXiv:2502.14344},
  year={2025}
}