A Joint Visual Compression and Perception Framework for Neuralmorphic Spiking Camera

4 March 2025
Kexiang Feng
Chuanmin Jia
Siwei Ma
Wen Gao
Abstract

The advent of neuralmorphic spike cameras has garnered significant attention for their ability to capture continuous motion with unparalleled temporal resolution. However, this imaging attribute necessitates considerable resources for binary spike data storage and transmission. In light of compression and spike-driven intelligent applications, we present the notion of Spike Coding for Intelligence (SCI), wherein spike sequences are compressed and optimized for both bit-rate and task performance. Drawing inspiration from the mammalian vision system, we propose a dual-pathway architecture for separate processing of spatial semantics and motion information, which is then merged to produce features for compression. A refinement scheme is also introduced to ensure consistency between decoded features and motion information. We further propose a temporal regression approach that integrates various motion dynamics, capitalizing on advancements in warping and deformation. Extensive experiments demonstrate that our scheme achieves state-of-the-art (SOTA) performance for spike compression and analysis. We achieve an average 17.25% BD-rate reduction compared to SOTA codecs and a 4.3% accuracy improvement over SpiReco for spike-based classification, with an 88.26% complexity reduction and a 42.41% inference time saving on the encoding side.
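The dual-pathway idea can be pictured as two parallel encoders over a binary spike tensor: one summarizing spatial semantics, one summarizing temporal motion dynamics, with their outputs fused into a single latent intended for entropy coding. The sketch below is only an illustration of that idea under assumed tensor shapes and layer choices (the names SpatialPathway, MotionPathway, and DualPathwayEncoder are hypothetical); it is not the authors' implementation.

# Illustrative sketch (not the paper's code): a dual-pathway encoder for binary
# spike tensors, loosely following the abstract's description of separate
# spatial-semantic and motion pathways merged into compressible features.
import torch
import torch.nn as nn

class SpatialPathway(nn.Module):
    """Hypothetical pathway: collapses time, preserves spatial detail."""
    def __init__(self, t_bins: int, channels: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(t_bins, channels, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, stride=2, padding=1),
        )

    def forward(self, spikes):            # spikes: (B, T, H, W) in {0, 1}
        return self.net(spikes)           # -> (B, C, H/4, W/4)

class MotionPathway(nn.Module):
    """Hypothetical pathway: 3D convolutions over time to capture motion dynamics."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, channels, kernel_size=(5, 3, 3), stride=(2, 2, 2), padding=(2, 1, 1)),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, kernel_size=(5, 3, 3), stride=(2, 2, 2), padding=(2, 1, 1)),
        )

    def forward(self, spikes):             # (B, T, H, W)
        x = self.net(spikes.unsqueeze(1))  # -> (B, C, T/4, H/4, W/4)
        return x.mean(dim=2)               # pool out time -> (B, C, H/4, W/4)

class DualPathwayEncoder(nn.Module):
    """Fuse both pathways into one latent intended for subsequent entropy coding."""
    def __init__(self, t_bins: int, channels: int = 64, latent: int = 128):
        super().__init__()
        self.spatial = SpatialPathway(t_bins, channels)
        self.motion = MotionPathway(channels)
        self.fuse = nn.Conv2d(2 * channels, latent, kernel_size=1)

    def forward(self, spikes):
        feats = torch.cat([self.spatial(spikes), self.motion(spikes)], dim=1)
        return self.fuse(feats)

# Example: a batch of 2 spike clips with 32 temporal bins at 128x128 resolution.
if __name__ == "__main__":
    spikes = (torch.rand(2, 32, 128, 128) > 0.9).float()
    latent = DualPathwayEncoder(t_bins=32)(spikes)
    print(latent.shape)                    # torch.Size([2, 128, 32, 32])

In this sketch the fusion is a simple channel concatenation followed by a 1x1 convolution; the paper's refinement scheme and temporal regression are not modeled here.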

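For context on the reported 17.25% figure: BD-rate (Bjøntegaard delta rate) summarizes the average bitrate change between two rate-distortion curves at equal quality. Below is a minimal sketch of the standard computation (cubic fit of log-rate against quality, then integration over the overlapping quality range), assuming PSNR as the quality axis and hypothetical rate-distortion points; the paper's exact evaluation protocol may differ.

# Minimal sketch of the standard Bjontegaard delta-rate (BD-rate) computation:
# fit log-bitrate as a cubic polynomial of quality, integrate both fits over
# the overlapping quality range, and report the average relative rate change.
import numpy as np

def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
    """Average bitrate change (%) of the test codec vs. the anchor; negative = saving."""
    p_a = np.polyfit(psnr_anchor, np.log(rate_anchor), 3)
    p_t = np.polyfit(psnr_test, np.log(rate_test), 3)

    lo = max(min(psnr_anchor), min(psnr_test))
    hi = min(max(psnr_anchor), max(psnr_test))

    int_a, int_t = np.polyint(p_a), np.polyint(p_t)
    avg_a = (np.polyval(int_a, hi) - np.polyval(int_a, lo)) / (hi - lo)
    avg_t = (np.polyval(int_t, hi) - np.polyval(int_t, lo)) / (hi - lo)
    return (np.exp(avg_t - avg_a) - 1.0) * 100.0

# Hypothetical rate-distortion points (bitrate in kbps, quality in dB PSNR).
if __name__ == "__main__":
    anchor_rate, anchor_psnr = [100, 200, 400, 800], [30.0, 33.0, 36.0, 39.0]
    test_rate, test_psnr = [85, 170, 340, 680], [30.2, 33.1, 36.2, 39.1]
    print(f"BD-rate: {bd_rate(anchor_rate, anchor_psnr, test_rate, test_psnr):.2f}%")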
@article{feng2025_2503.02725,
  title={A Joint Visual Compression and Perception Framework for Neuralmorphic Spiking Camera},
  author={Kexiang Feng and Chuanmin Jia and Siwei Ma and Wen Gao},
  journal={arXiv preprint arXiv:2503.02725},
  year={2025}
}