DriveSOTIF: Advancing Perception SOTIF Through Multimodal Large Language Models

Abstract

Human drivers naturally perceive driving scenarios, anticipate potential hazards, and react instinctively, owing to spatial and causal intelligence that lets them understand, predict, and interact with the 3D world both spatially and temporally. Autonomous vehicles lack these capabilities, which makes it difficult for them to manage perception-related Safety of the Intended Functionality (SOTIF) risks, particularly in complex and unpredictable driving conditions. To address this gap, we propose an approach that fine-tunes multimodal large language models (MLLMs) on a customized dataset designed specifically to capture perception-related SOTIF scenarios. Model benchmarking demonstrates that this tailored dataset enables the models to better understand and respond to these complex driving situations. In real-world case studies, the proposed method correctly handles challenging scenarios that even human drivers may find difficult, and real-time performance tests indicate that the models can operate efficiently in live driving environments. Together with the dataset generation pipeline, this approach shows significant promise for improving the identification, cognition, prediction of, and reaction to SOTIF-related risks in autonomous driving systems. The dataset and related information are available at: this https URL
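The abstract does not specify an implementation, but as a rough illustration of what fine-tuning an MLLM on a perception-SOTIF dataset could look like in practice, the following Python sketch applies LoRA adapters to an off-the-shelf vision-language model on a single visual question answering sample. The base model (LLaVA-1.5 via Hugging Face transformers and PEFT), the prompt format, and the sample schema (camera frame, hazard question, risk description) are all assumptions made for illustration, not the authors' actual setup.

# Hypothetical sketch: LoRA fine-tuning of an off-the-shelf MLLM on a
# SOTIF-style visual question answering sample. The model choice
# (LLaVA-1.5) and the dataset schema are assumptions for illustration;
# the paper's actual models and data format may differ.
import torch
from transformers import AutoProcessor, LlavaForConditionalGeneration
from peft import LoraConfig, get_peft_model
from PIL import Image

model_id = "llava-hf/llava-1.5-7b-hf"  # assumed base model
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Train only low-rank adapters on the attention projections; this keeps
# fine-tuning cheap enough for a modest GPU budget.
lora = LoraConfig(r=16, lora_alpha=32,
                  target_modules=["q_proj", "v_proj"],
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

def make_batch(image_path, question, answer):
    # One SOTIF scenario sample: camera frame + hazard question + the
    # expected risk description (this schema is hypothetical).
    prompt = f"USER: <image>\n{question} ASSISTANT: {answer}"
    image = Image.open(image_path)
    batch = processor(images=image, text=prompt, return_tensors="pt")
    batch["labels"] = batch["input_ids"].clone()
    return batch.to(model.device)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
batch = make_batch("scene_0001.jpg",
                   "What perception-related SOTIF risk is present?",
                   "Heavy rain degrades camera visibility; a pedestrian "
                   "near the crosswalk may be missed.")
loss = model(**batch).loss   # standard next-token loss over the sequence
loss.backward()
optimizer.step()

In a full pipeline, each batch would come from a DataLoader over the whole SOTIF dataset, typically with the question tokens masked out of the labels so that the loss supervises only the answer.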

@article{huang2025_2505.07084,
  title={DriveSOTIF: Advancing Perception SOTIF Through Multimodal Large Language Models},
  author={Shucheng Huang and Freda Shi and Chen Sun and Jiaming Zhong and Minghao Ning and Yufeng Yang and Yukun Lu and Hong Wang and Amir Khajepour},
  journal={arXiv preprint arXiv:2505.07084},
  year={2025}
}