AccidentBlip: Agent of Accident Warning based on MA-former

Abstract

In complex transportation systems, accurately sensing the surrounding environment and predicting the risk of potential accidents is crucial. Most existing accident prediction methods are built on temporal neural networks such as RNNs and LSTMs. Recent multimodal fusion approaches improve vehicle localization through 3D object detection and assess potential risk by computing inter-vehicle distances. However, both the temporal networks and the multimodal fusion methods suffer from limited detection robustness and high economic cost. To address these challenges, we propose AccidentBlip, a vision-only framework that employs our self-designed Motion Accident Transformer (MA-former) to process each video frame. Unlike a conventional self-attention mechanism, MA-former replaces the self-attention in Q-former with temporal attention, so that the query corresponding to the previous frame generates the query input for the next frame. We further introduce a residual connection between the queries of consecutive frames to strengthen the model's temporal modeling capability. For complex vehicle-to-vehicle (V2V) and vehicle-to-everything (V2X) scenarios, AccidentBlip adapts by concatenating the queries from multiple cameras, effectively capturing spatial and temporal relationships. AccidentBlip achieves SOTA performance on both the accident detection and prediction tasks of the DeepAccident dataset, and it also outperforms current SOTA methods in V2V and V2X scenarios, demonstrating a superior capability to understand complex real-world environments.
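The temporal query recurrence described above can be sketched in simplified form: each frame's query attends over that frame's visual features, and a residual connection carries the previous frame's query forward. This is an illustrative toy (plain Python, dot-product attention as a stand-in for the learned temporal attention; all function names are hypothetical and not from the paper's code):

```python
import math

def temporal_attention(query, frame_feats):
    # Toy stand-in for MA-former's temporal attention: weight the current
    # frame's feature vectors by dot-product similarity to the query, then
    # pool them into an updated query. (Assumed interface, not the paper's.)
    scores = [sum(q * f for q, f in zip(query, feat)) for feat in frame_feats]
    exps = [math.exp(s) for s in scores]          # softmax over features
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(query)
    return [sum(w * feat[i] for w, feat in zip(weights, frame_feats))
            for i in range(dim)]

def ma_former_rollout(frames, init_query):
    # Roll the query through the video: the query produced for frame t
    # becomes the query input for frame t+1, with a residual connection
    # between the queries of consecutive frames.
    query = init_query
    queries = []
    for frame_feats in frames:
        attended = temporal_attention(query, frame_feats)
        query = [a + q for a, q in zip(attended, query)]  # residual link
        queries.append(query)
    return queries
```

In a multi-camera V2V/V2X setting, the per-camera queries at each step would be concatenated before this update, so the recurrence mixes spatial views as well as time.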
