A Multi-modal Fusion Network for Terrain Perception Based on Illumination Aware

16 May 2025
Rui Wang
Shichun Yang
Yuyi Chen
Zhuoyang Li
Zexiang Tong
Jianyi Xu
Jiayi Lu
Xinjie Feng
Yaoguang Cao
Abstract

Road terrains play a crucial role in ensuring the driving safety of autonomous vehicles (AVs). However, existing AV sensors, including cameras and LiDARs, are susceptible to variations in lighting and weather conditions, making real-time perception of road conditions challenging. In this paper, we propose an illumination-aware multi-modal fusion network (IMF) that leverages both exteroceptive and proprioceptive perception and optimizes the fusion process based on illumination features. We introduce an illumination-perception sub-network to accurately estimate illumination features. Moreover, we design a multi-modal fusion network that dynamically adjusts the weights of different modalities according to illumination features. We enhance the optimization process by pre-training the illumination-perception sub-network and incorporating an illumination loss as one of the training constraints. Extensive experiments demonstrate that the IMF outperforms state-of-the-art methods. Comparisons with single-modality perception methods highlight the comprehensive advantages of multi-modal fusion for accurately perceiving road terrains under varying lighting conditions. Our dataset is available at: this https URL.
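The abstract does not specify the architecture in detail; as a rough illustration of the illumination-conditioned weighting idea it describes, the following PyTorch-style sketch shows one plausible way a gating module could map illumination features to per-modality fusion weights. All names, dimensions, and the two-modality setup here are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of illumination-conditioned modality weighting
# (module name, dimensions, and structure are assumptions, not from the paper).
import torch
import torch.nn as nn

class IlluminationGatedFusion(nn.Module):
    """Fuses exteroceptive (camera) and proprioceptive features using
    weights predicted from an illumination feature vector."""
    def __init__(self, feat_dim: int = 256, illum_dim: int = 32):
        super().__init__()
        # Small MLP mapping illumination features to one weight per modality.
        self.gate = nn.Sequential(
            nn.Linear(illum_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 2),  # two modalities: camera, proprioception
        )

    def forward(self, cam_feat, prop_feat, illum_feat):
        # w: (batch, 2), normalized so the modality weights sum to 1.
        w = torch.softmax(self.gate(illum_feat), dim=-1)
        fused = w[:, 0:1] * cam_feat + w[:, 1:2] * prop_feat
        return fused, w

# Usage example: in low light, the gate can learn to down-weight camera features.
fusion = IlluminationGatedFusion()
cam = torch.randn(4, 256)
prop = torch.randn(4, 256)
illum = torch.randn(4, 32)
fused, weights = fusion(cam, prop, illum)
print(fused.shape, weights.shape)  # torch.Size([4, 256]) torch.Size([4, 2])
```

In such a design, the illumination loss mentioned in the abstract would be an additional supervision term on the illumination-perception sub-network, trained alongside the main terrain-perception objective.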

@article{wang2025_2505.11066,
  title={A Multi-modal Fusion Network for Terrain Perception Based on Illumination Aware},
  author={Rui Wang and Shichun Yang and Yuyi Chen and Zhuoyang Li and Zexiang Tong and Jianyi Xu and Jiayi Lu and Xinjie Feng and Yaoguang Cao},
  journal={arXiv preprint arXiv:2505.11066},
  year={2025}
}