
vSTMD: Visual Motion Detection for Extremely Tiny Target at Various Velocities

Main: 12 pages, 8 figures
Bibliography: 3 pages
Abstract

Visual motion detection for extremely tiny (ET-) targets is challenging due to their category-independent nature and the scarcity of visual cues, which often incapacitate mainstream feature-based models. Natural architectures with rich interpretability offer a promising alternative; in particular, models derived from the insect Small Target Motion Detector (STMD) visual pathway have demonstrated their effectiveness. However, previous STMD models are constrained to a narrow velocity range, hindering their efficacy in real-world scenarios where targets exhibit diverse and unstable dynamics. To address this limitation, we present vSTMD, a learning-free model for motion detection of ET-targets at various velocities. Our key innovations include: (1) a cross-Inhibition Dynamic Potential (cIDP) that serves as a self-adaptive mechanism efficiently capturing motion cues across a wide velocity spectrum, and (2) the first Collaborative Directional Gradient Calculation (CDGC) strategy, which enhances orienting accuracy and robustness while reducing computational overhead to one-eighth that of previous isolated strategies. Evaluated on the real-world dataset RIST, the proposed vSTMD and its feedback-facilitated variant vSTMD-F achieve relative F1 gains of 30% and 58% over state-of-the-art (SOTA) STMD approaches, respectively. Furthermore, both models demonstrate competitive orientation estimation performance compared to SOTA deep learning-driven methods. Experiments also reveal the superiority of the natural architecture for ET-object motion detection: vSTMD is 60× faster than contemporary data-driven methods, making it highly suitable for real-time applications in dynamic scenarios and complex backgrounds. Code is available at this https URL.
