Fast Switching Serial and Parallel Paradigms of SNN Inference on Multi-core Heterogeneous Neuromorphic Platform SpiNNaker2

24 June 2024
Jiaxin Huang
Bernhard Vogginger
Florian Kelber
Hector A. Gonzalez
Klaus Knobloch
Christian Mayr
arXiv:2406.17049
Abstract

With serial and parallel processors introduced into Spiking Neural Network (SNN) execution, researchers are increasingly dedicated to improving the performance of these computing paradigms by exploiting the strengths of the available processors. In this paper, we compare and integrate the serial and parallel paradigms into one SNN compiling system. To switch between them faster at layer granularity, we train a classifier that predicts the better paradigm before compiling instead of making the decision afterward, saving a substantial amount of compile time and RAM on the host PC. The Adaptive Boost classifier, which achieves the highest accuracy (91.69%) among 12 candidate classifiers, is integrated into the switching system; the resulting system uses less memory and fewer processors on the multi-core neuromorphic hardware backend SpiNNaker2 than either paradigm alone. To the best of our knowledge, this is the first fast-switching compiling system for SNN simulation.
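To make the classifier-based switching idea concrete, the following is a minimal sketch (not the authors' code) of training an Adaptive Boost classifier to predict the better paradigm per layer before compilation. The feature set (neuron count, fan-in, spike rate, timestep count) and the synthetic labels are illustrative assumptions; the paper does not specify the exact features or training data used.

```python
# Hedged sketch: per-layer serial-vs-parallel paradigm prediction with AdaBoost.
# Features and labels below are synthetic placeholders, not the authors' dataset.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Each row describes one SNN layer with assumed, normalised features:
# [n_neurons, fan_in, avg_spike_rate, n_timesteps].
# Label 0 = serial paradigm is faster, 1 = parallel paradigm is faster.
X = rng.uniform(size=(2000, 4))
y = (X[:, 0] * X[:, 1] > 0.25).astype(int)  # toy rule standing in for measured outcomes

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

clf = AdaBoostClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# At compile time, each layer's features would be fed to the trained classifier
# so the compiler can pick a mapping without compiling both variants.
print("held-out accuracy:", clf.score(X_test, y_test))
```

In this setup the cost of a wrong prediction is only a slower layer mapping, not an incorrect simulation, which is why a lightweight pre-compilation classifier can replace the slower compile-both-and-compare approach described in the abstract.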
