A 71.2-μW Speech Recognition Accelerator with Recurrent Spiking Neural Network

27 March 2025
Chih-Chyau Yang
Tian-Sheuan Chang
Abstract

This paper introduces a 71.2-μW speech recognition accelerator designed for real-time applications on edge devices, with an emphasis on ultra-low-power operation. Through algorithm and hardware co-optimization, we propose a compact recurrent spiking neural network with two recurrent layers, one fully connected layer, and a low time step (1 or 2). The 2.79-MB model undergoes pruning and 4-bit fixed-point quantization, shrinking it by 96.42% to 0.1 MB. On the hardware side, mixed-level pruning, zero-skipping, and merged-spike techniques reduce computational complexity by 90.49% to 13.86 MMAC/s. Parallel time-step execution resolves inter-time-step data dependencies and enables weight-buffer power savings through weight sharing. Capitalizing on the sparse spike activity, an input broadcasting scheme eliminates zero computations, further saving power. Implemented in a TSMC 28-nm process, the design operates in real time at 100 kHz while consuming 71.2 μW, surpassing state-of-the-art designs. At 500 MHz, it achieves 28.41 TOPS/W energy efficiency and 1903.11 GOPS/mm² area efficiency.
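The headline compression figure checks out arithmetically: pruning plus 4-bit fixed-point quantization shrinking the model by 96.42% gives 2.79 MB × (1 − 0.9642) ≈ 0.1 MB. The sketch below illustrates the zero-skipping idea the abstract describes: because spikes are binary and sparse, each layer's matrix-vector product collapses to summing the weight columns of the neurons that actually fired, so silent inputs cost nothing. This is a minimal NumPy sketch of the functional computation only; all layer sizes, thresholds, leak factors, and names are illustrative assumptions, and it does not model the authors' hardware scheduling (parallel time-step execution, weight sharing) or 4-bit datapath.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not from the paper): a 40-dim
# feature frame, two 128-neuron recurrent spiking layers, and a
# 10-class fully connected readout, run for T = 2 time steps.
N_IN, N_HID, N_OUT, T = 40, 128, 10, 2

W1 = 0.10 * rng.standard_normal((N_HID, N_IN))   # layer-1 input weights
R1 = 0.05 * rng.standard_normal((N_HID, N_HID))  # layer-1 recurrent weights
W2 = 0.10 * rng.standard_normal((N_HID, N_HID))  # layer-2 input weights
R2 = 0.05 * rng.standard_normal((N_HID, N_HID))  # layer-2 recurrent weights
Wfc = 0.10 * rng.standard_normal((N_OUT, N_HID)) # fully connected readout

def lif_step(W, R, s_in, s_prev, v, thr=1.0, leak=0.9):
    """One time step of a recurrent leaky integrate-and-fire layer.

    Zero-skipping: spikes are binary, so the matrix-vector product
    reduces to summing the weight columns of neurons that fired;
    columns belonging to silent neurons are never read.
    """
    active_in = np.flatnonzero(s_in)     # input spikes at this step
    active_rec = np.flatnonzero(s_prev)  # recurrent spikes from t-1
    current = W[:, active_in].sum(axis=1) + R[:, active_rec].sum(axis=1)
    v = leak * v + current               # leaky membrane integration
    s_out = (v >= thr).astype(np.int8)   # fire where threshold is crossed
    v = np.where(s_out == 1, 0.0, v)     # reset fired neurons
    return s_out, v

s_in = (rng.random(N_IN) < 0.2).astype(np.int8)  # sparse binary input frame
s1 = np.zeros(N_HID, np.int8); v1 = np.zeros(N_HID)
s2 = np.zeros(N_HID, np.int8); v2 = np.zeros(N_HID)
logits = np.zeros(N_OUT)
for _ in range(T):
    s1, v1 = lif_step(W1, R1, s_in, s1, v1)
    s2, v2 = lif_step(W2, R2, s1, s2, v2)
    # Readout accumulates only the columns of spiking layer-2 neurons.
    logits += Wfc[:, np.flatnonzero(s2)].sum(axis=1)
print("predicted class:", int(logits.argmax()))

The column-gather pattern is also a reasonable software analogue of the abstract's input broadcasting scheme: each firing neuron is "broadcast" once to its entire fan-out of weights, and multiplications by zero never occur.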

@article{yang2025_2503.21337,
  title={A 71.2-$\mu$W Speech Recognition Accelerator with Recurrent Spiking Neural Network},
  author={Chih-Chyau Yang and Tian-Sheuan Chang},
  journal={arXiv preprint arXiv:2503.21337},
  year={2025}
}