
RGB-Event Fusion with Self-Attention for Collision Prediction

Abstract

Ensuring robust and real-time obstacle avoidance is critical for the safe operation of autonomous robots in dynamic, real-world environments. This paper proposes a neural network framework for predicting the time and collision position of an unmanned aerial vehicle with a dynamic object, using RGB and event-based vision sensors. The proposed architecture consists of two separate encoder branches, one for each modality, followed by fusion via self-attention to improve prediction accuracy. To facilitate benchmarking, we leverage the ABCD [8] dataset, which enables detailed comparisons of single-modality and fusion-based approaches. At the same prediction throughput of 50 Hz, the experimental results show that the fusion-based model improves prediction accuracy over single-modality approaches by 1% on average and by 10% for distances beyond 0.5 m, but at the cost of +71% in memory and +105% in FLOPs. Notably, the event-based model outperforms the RGB model by 4% for position and 26% for time error at a similar computational cost, making it a competitive alternative. Additionally, we evaluate quantized versions of the event-based models, applying 1- to 8-bit quantization to assess the trade-offs between predictive performance and computational efficiency. These findings highlight the trade-offs of multi-modal perception using RGB and event-based cameras in robotic applications.
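To make the described architecture concrete, the following is a minimal sketch of a dual-branch RGB/event network with self-attention fusion, assuming PyTorch. The encoder widths, event channel count, input resolution, and the output parameterisation (2D collision position plus time-to-collision) are illustrative assumptions, not the authors' published configuration.

```python
# Sketch only: two modality-specific encoders whose tokens are fused by
# self-attention before a small regression head. Hyperparameters are assumptions.
import torch
import torch.nn as nn


def conv_encoder(in_channels: int, dim: int) -> nn.Sequential:
    """Small convolutional encoder producing a spatial feature map."""
    return nn.Sequential(
        nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(64, dim, 3, stride=2, padding=1), nn.ReLU(),
    )


class FusionCollisionPredictor(nn.Module):
    def __init__(self, dim: int = 128, heads: int = 4):
        super().__init__()
        self.rgb_encoder = conv_encoder(3, dim)    # RGB frames: 3 channels
        self.event_encoder = conv_encoder(2, dim)  # event frames: 2 polarity channels (assumption)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(dim, 64), nn.ReLU(),
            nn.Linear(64, 3),                      # (x, y) collision position + time-to-collision
        )

    def forward(self, rgb: torch.Tensor, events: torch.Tensor) -> torch.Tensor:
        # Encode each modality, then flatten spatial maps into token sequences.
        t_rgb = self.rgb_encoder(rgb).flatten(2).transpose(1, 2)       # (B, N_rgb, dim)
        t_evt = self.event_encoder(events).flatten(2).transpose(1, 2)  # (B, N_evt, dim)
        tokens = torch.cat([t_rgb, t_evt], dim=1)                      # joint token set
        fused, _ = self.attn(tokens, tokens, tokens)                   # self-attention fusion
        return self.head(fused.mean(dim=1))                            # pooled prediction


# Usage with 64x64 inputs for both modalities (resolution is an assumption).
model = FusionCollisionPredictor()
pred = model(torch.randn(1, 3, 64, 64), torch.randn(1, 2, 64, 64))
print(pred.shape)  # torch.Size([1, 3])
```

Letting attention operate over the concatenated RGB and event tokens is one straightforward way to realise cross-modal fusion; the paper's exact fusion layer, token layout, and output targets may differ.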

@article{bonazzi2025_2505.04258,
  title={RGB-Event Fusion with Self-Attention for Collision Prediction},
  author={Pietro Bonazzi and Christian Vogt and Michael Jost and Haotong Qin and Lyes Khacef and Federico Paredes-Valles and Michele Magno},
  journal={arXiv preprint arXiv:2505.04258},
  year={2025}
}