
Benchmarking Multi-modal Semantic Segmentation under Sensor Failures: Missing and Noisy Modality Robustness

Abstract

Multi-modal semantic segmentation (MMSS) addresses the limitations of single-modality data by integrating complementary information across modalities. Despite notable progress, a significant gap persists between research and real-world deployment due to variability and uncertainty in multi-modal data quality. Robustness has thus become essential for practical MMSS applications. However, the absence of standardized benchmarks for evaluating robustness hinders further advancement. To address this, we first survey existing MMSS literature and categorize representative methods to provide a structured overview. We then introduce a robustness benchmark that evaluates MMSS models under three scenarios: Entire-Missing Modality (EMM), Random-Missing Modality (RMM), and Noisy Modality (NM). From a probabilistic standpoint, we model modality failure under two conditions: (1) all damaged combinations are equally probable; (2) each modality fails independently following a Bernoulli distribution. Based on these, we propose four metrics, $\mathrm{mIoU}^{Avg}_{EMM}$, $\mathrm{mIoU}^{E}_{EMM}$, $\mathrm{mIoU}^{Avg}_{RMM}$, and $\mathrm{mIoU}^{E}_{RMM}$, to assess model robustness under EMM and RMM. This work provides the first dedicated benchmark for MMSS robustness, offering new insights and tools to advance the field. Source code is available at this https URL.
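
To illustrate the two failure conditions described above, the following is a minimal Python sketch of how an equal-weight average mIoU and a Bernoulli-weighted expected mIoU over missing-modality combinations could be computed. It is not the paper's released evaluation code or its exact metric definitions: the function names, the renormalization choice, the assumed per-modality failure probability, and the example mIoU numbers are all hypothetical.

def uniform_average_miou(miou_per_missing):
    # Average mIoU over the evaluated missing-modality combinations,
    # treating every combination as equally probable (condition 1).
    return sum(miou_per_missing.values()) / len(miou_per_missing)

def bernoulli_expected_miou(miou_per_missing, modalities, fail_prob):
    # Expected mIoU when each modality fails independently with
    # probability fail_prob (condition 2, Bernoulli failures).
    # miou_per_missing maps a frozenset of *missing* modalities to the
    # mIoU measured with those modalities dropped.
    expected = 0.0
    total_weight = 0.0
    for missing, miou in miou_per_missing.items():
        weight = 1.0
        for m in modalities:
            weight *= fail_prob if m in missing else (1.0 - fail_prob)
        expected += weight * miou
        total_weight += weight
    # Assumption: weights are renormalized over the evaluated combinations,
    # since the all-missing case is typically not evaluated.
    return expected / total_weight

# Hypothetical scores for an RGB-Depth model (illustrative numbers only).
scores = {
    frozenset(): 0.58,            # both modalities present
    frozenset({"depth"}): 0.52,   # depth missing
    frozenset({"rgb"}): 0.31,     # RGB missing
}
print(uniform_average_miou(scores))                                      # ~0.47
print(bernoulli_expected_miou(scores, ["rgb", "depth"], fail_prob=0.2))

Under this reading, the equal-weight average corresponds to condition (1) and the Bernoulli-weighted expectation to condition (2); how these map onto the four proposed metrics for EMM and RMM is defined in the paper itself.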

@article{liao2025_2503.18445,
  title={Benchmarking Multi-modal Semantic Segmentation under Sensor Failures: Missing and Noisy Modality Robustness},
  author={Chenfei Liao and Kaiyu Lei and Xu Zheng and Junha Moon and Zhixiong Wang and Yixuan Wang and Danda Pani Paudel and Luc Van Gool and Xuming Hu},
  journal={arXiv preprint arXiv:2503.18445},
  year={2025}
}