Spiking Neural Network as Adaptive Event Stream Slicer

3 October 2024
Jiahang Cao
Mingyuan Sun
Ziqing Wang
Hao Cheng
Qiang Zhang
Shibo Zhou
Renjing Xu
arXiv · PDF · HTML
Abstract

Event-based cameras are attracting significant interest as they provide rich edge information, high dynamic range, and high temporal resolution. Many state-of-the-art event-based algorithms rely on splitting the events into fixed groups, resulting in the omission of crucial temporal information, particularly when dealing with diverse motion scenarios (e.g., high/low speed). In this work, we propose SpikeSlicer, a novel plug-and-play event processing method capable of splitting the event stream adaptively. SpikeSlicer utilizes a low-energy spiking neural network (SNN) to trigger event slicing. To guide the SNN to fire spikes at optimal time steps, we propose the Spiking Position-aware Loss (SPA-Loss) to modulate the neuron's state. Additionally, we develop a Feedback-Update training strategy that refines the slicing decisions using feedback from the downstream artificial neural network (ANN). Extensive experiments demonstrate that our method yields significant performance improvements in event-based object tracking and recognition. Notably, SpikeSlicer provides a brand-new SNN-ANN cooperation paradigm, where the SNN acts as an efficient, low-energy data processor that assists the ANN in improving downstream performance, injecting new perspectives and potential avenues of exploration. Our code is available at this https URL.
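
To make the slicing idea concrete, below is a minimal, self-contained Python sketch of how a single leaky integrate-and-fire (LIF) neuron can adaptively cut an event stream: the neuron integrates per-bin event counts, leaks between steps, and each spike closes the current slice, so dense (fast-motion) periods yield short slices and sparse (slow-motion) periods yield long ones. This is an illustrative toy, not the paper's SpikeSlicer (which trains the SNN with the SPA-Loss and the Feedback-Update strategy); the function name, bin width, leak factor, and threshold are all hypothetical choices made for the example.

import numpy as np

def lif_slice_events(event_timestamps, bin_width=1e3, tau=0.9, threshold=4.0):
    """Toy adaptive event-stream slicer driven by one LIF neuron.

    Events are binned into fixed time steps (timestamps in microseconds
    here); the neuron integrates each bin's normalized event count,
    leaks with factor `tau`, and spikes when the membrane potential
    crosses `threshold`. Each spike ends the current slice. All
    parameter values are illustrative, not taken from the paper.
    """
    t0, t1 = event_timestamps.min(), event_timestamps.max()
    edges = np.arange(t0, t1 + bin_width, bin_width)
    counts, _ = np.histogram(event_timestamps, bins=edges)

    v = 0.0                                       # membrane potential
    slices, start = [], 0
    for i, c in enumerate(counts):
        v = tau * v + c / max(counts.max(), 1)    # leak, then integrate
        if v >= threshold:                        # spike -> cut a slice here
            slices.append((edges[start], edges[i + 1]))
            v = 0.0                               # hard reset after firing
            start = i + 1
    if start < len(counts):                       # flush the trailing slice
        slices.append((edges[start], edges[-1]))
    return slices

# Example: a dense burst followed by a sparse period yields uneven slices,
# unlike fixed-group splitting.
rng = np.random.default_rng(0)
ts = np.sort(np.concatenate([
    rng.uniform(0, 5e4, 4000),     # fast motion: many events
    rng.uniform(5e4, 2e5, 500),    # slow motion: few events
]))
for lo, hi in lif_slice_events(ts):
    print(f"slice [{lo:.0f}, {hi:.0f}) us")

In the paper's actual pipeline, the spike times are not hand-tuned as above but learned, with the downstream ANN's feedback steering where the SNN fires.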

View on arXiv
@article{cao2025_2410.02249,
  title={Spiking Neural Network as Adaptive Event Stream Slicer},
  author={Jiahang Cao and Mingyuan Sun and Ziqing Wang and Hao Cheng and Qiang Zhang and Shibo Zhou and Renjing Xu},
  journal={arXiv preprint arXiv:2410.02249},
  year={2025}
}