
VADMamba: Exploring State Space Models for Fast Video Anomaly Detection

Abstract

Video anomaly detection (VAD) methods are mostly CNN- or Transformer-based and achieve impressive results, but their focus on detection accuracy often comes at the expense of inference speed. The emergence of state space models in computer vision, exemplified by the Mamba model, demonstrates improved computational efficiency through selective scanning and great potential for long-range modeling. Our study pioneers the application of Mamba to VAD, dubbed VADMamba, which is based on multi-task learning for frame prediction and optical flow reconstruction. Specifically, we propose the VQ-Mamba Unet (VQ-MaU) framework, which incorporates a Vector Quantization (VQ) layer and a Mamba-based Non-negative Visual State Space (NVSS) block. Furthermore, two individual VQ-MaU networks separately predict frames and reconstruct the corresponding optical flows, further boosting accuracy through a clip-level fusion evaluation strategy. Experimental results across three benchmark datasets validate the efficacy of the proposed VADMamba, demonstrating superior inference speed compared to previous work. Code is available at this https URL.
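The abstract names a Vector Quantization (VQ) layer inside VQ-MaU but does not detail it. The sketch below is a minimal, standard VQ-VAE style quantizer (nearest-codebook lookup with a straight-through estimator), shown only to make the component concrete; it is not necessarily the paper's exact layer, and `num_codes`, `code_dim`, and `beta` are illustrative values, not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VectorQuantizer(nn.Module):
    """Minimal VQ-VAE style codebook lookup with a straight-through estimator."""

    def __init__(self, num_codes=512, code_dim=64, beta=0.25):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, code_dim)
        self.codebook.weight.data.uniform_(-1.0 / num_codes, 1.0 / num_codes)
        self.beta = beta  # commitment loss weight (hypothetical value)

    def forward(self, z):
        # z: (B, N, D) latent vectors from the encoder
        flat = z.reshape(-1, z.shape[-1])                 # (B*N, D)
        dist = torch.cdist(flat, self.codebook.weight)    # (B*N, num_codes)
        idx = dist.argmin(dim=-1)                         # nearest-code indices
        z_q = self.codebook(idx).view_as(z)               # quantized latents
        # Codebook loss pulls codes toward encoder outputs;
        # commitment loss keeps encoder outputs near their chosen codes.
        loss = F.mse_loss(z_q, z.detach()) + self.beta * F.mse_loss(z, z_q.detach())
        # Straight-through estimator: copy gradients past the discrete lookup.
        z_q = z + (z_q - z).detach()
        return z_q, idx.view(z.shape[:-1]), loss

if __name__ == "__main__":
    vq = VectorQuantizer()
    z = torch.randn(2, 16, 64)          # (batch, tokens, code_dim)
    z_q, idx, loss = vq(z)
    print(z_q.shape, idx.shape, loss.item())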
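The abstract also mentions a clip-level fusion evaluation strategy combining the two VQ-MaU streams, without spelling out the computation. Below is one plausible reading, assuming per-frame errors from the frame-prediction and flow-reconstruction streams are min-max normalized within each clip and combined by a weighted sum; the function names, `clip_len`, and `weight` are hypothetical, not taken from the paper.

```python
import numpy as np

def min_max_normalize(scores):
    """Scale a 1-D array of per-frame errors into [0, 1]."""
    s = np.asarray(scores, dtype=np.float64)
    return (s - s.min()) / (s.max() - s.min() + 1e-8)

def clip_level_fusion(pred_errors, flow_errors, clip_len=16, weight=0.5):
    """Fuse frame-prediction and flow-reconstruction errors per frame.

    Errors are normalized within each clip (a window of `clip_len` frames)
    rather than over the whole video, then combined by a weighted sum.
    """
    n = len(pred_errors)
    fused = np.zeros(n)
    for start in range(0, n, clip_len):
        end = min(start + clip_len, n)
        p = min_max_normalize(pred_errors[start:end])
        f = min_max_normalize(flow_errors[start:end])
        fused[start:end] = weight * p + (1.0 - weight) * f
    return fused

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pred = rng.random(64)   # stand-in per-frame prediction errors
    flow = rng.random(64)   # stand-in per-frame flow reconstruction errors
    scores = clip_level_fusion(pred, flow)
    print(scores.shape, scores.min(), scores.max())
```

Normalizing per clip rather than per video is a common choice in VAD scoring because it keeps local error contrasts visible; whether VADMamba does exactly this is an assumption here.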

@article{lyu2025_2503.21169,
  title={VADMamba: Exploring State Space Models for Fast Video Anomaly Detection},
  author={Jiahao Lyu and Minghua Zhao and Jing Hu and Xuewen Huang and Yifei Chen and Shuangli Du},
  journal={arXiv preprint arXiv:2503.21169},
  year={2025}
}