Sparse Convolutional Recurrent Learning for Efficient Event-based Neuromorphic Object Detection

16 June 2025
Shenqi Wang
Yingfu Xu
Amirreza Yousefzadeh
Sherif Eissa
Henk Corporaal
Federico Corradi
Guangzhi Tang
Main: 7 pages · 3 figures · Bibliography: 1 page
Abstract

Leveraging the high temporal resolution and dynamic range of event cameras, event-based object detection can enhance the performance and safety of automotive and robotics applications in real-world scenarios. However, processing sparse event data requires compute-intensive convolutional recurrent units, which complicates their integration into resource-constrained edge applications. Here, we propose the Sparse Event-based Efficient Detector (SEED) for efficient event-based object detection on neuromorphic processors. We introduce sparse convolutional recurrent learning, which achieves over 92% activation sparsity in recurrent processing and vastly reduces the cost of spatiotemporal reasoning on sparse event data. We validated our method on Prophesee's 1 Mpx and Gen1 event-based object detection datasets. Notably, SEED sets a new benchmark in computational efficiency for event-based object detection that requires long-term temporal learning. Compared to state-of-the-art methods, SEED significantly reduces synaptic operations while delivering higher or comparable mAP. Our hardware simulations showcase the critical role of SEED's hardware-aware design in achieving energy-efficient and low-latency neuromorphic processing.
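The abstract only describes sparse convolutional recurrent learning at a high level. As a rough illustration of the idea, the following is a minimal PyTorch sketch of a convolutional GRU whose hidden state is hard-thresholded so that most recurrent activations are exactly zero. The class name SparseConvGRUCell, the threshold value, and the toy event-frame input are assumptions made for illustration; this is not the authors' SEED architecture, and actual compute savings require hardware or kernels (e.g., a neuromorphic processor) that skip zero operands rather than a dense GPU convolution.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SparseConvGRUCell(nn.Module):
    """Illustrative convolutional GRU cell with a thresholded (sparse) hidden state.

    Not the paper's SEED implementation; it only sketches how keeping recurrent
    activations sparse lets zero-skipping hardware avoid most synaptic operations.
    """

    def __init__(self, in_ch, hid_ch, k=3, threshold=0.1):
        super().__init__()
        pad = k // 2
        # Update/reset gates and candidate state computed from [input, hidden],
        # as in a standard ConvGRU.
        self.gates = nn.Conv2d(in_ch + hid_ch, 2 * hid_ch, k, padding=pad)
        self.cand = nn.Conv2d(in_ch + hid_ch, hid_ch, k, padding=pad)
        self.threshold = threshold

    def forward(self, x, h):
        z, r = torch.sigmoid(self.gates(torch.cat([x, h], dim=1))).chunk(2, dim=1)
        h_tilde = torch.tanh(self.cand(torch.cat([x, r * h], dim=1)))
        h_new = (1 - z) * h + z * h_tilde
        # Shifted ReLU keeps most hidden entries exactly zero; this is where the
        # activation sparsity in this sketch comes from.
        return F.relu(h_new - self.threshold)


if __name__ == "__main__":
    # Toy usage: feed a sparse synthetic "event frame" sequence and report
    # the resulting hidden-state sparsity.
    torch.manual_seed(0)
    cell = SparseConvGRUCell(in_ch=2, hid_ch=16, threshold=0.1)
    h = torch.zeros(1, 16, 64, 64)
    for _ in range(10):
        # Hypothetical event tensor: ~2% of pixels carry events (2 polarities).
        x = (torch.rand(1, 2, 64, 64) < 0.02).float()
        h = cell(x, h)
    sparsity = (h == 0).float().mean().item()
    print(f"hidden-state activation sparsity: {sparsity:.1%}")
```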

View on arXiv
@article{wang2025_2506.13440,
  title={Sparse Convolutional Recurrent Learning for Efficient Event-based Neuromorphic Object Detection},
  author={Shenqi Wang and Yingfu Xu and Amirreza Yousefzadeh and Sherif Eissa and Henk Corporaal and Federico Corradi and Guangzhi Tang},
  journal={arXiv preprint arXiv:2506.13440},
  year={2025}
}