AirSLAM: An Efficient and Illumination-Robust Point-Line Visual SLAM System

Abstract

In this paper, we present an efficient visual SLAM system designed to tackle both short-term and long-term illumination challenges. Our system adopts a hybrid approach that combines deep learning techniques for feature detection and matching with traditional backend optimization methods. Specifically, we propose a unified convolutional neural network (CNN) that simultaneously extracts keypoints and structural lines. These features are then associated, matched, triangulated, and optimized in a coupled manner. Additionally, we introduce a lightweight relocalization pipeline that reuses the built map, in which keypoints, lines, and a structure graph are used to match the query frame with the map. To enhance the applicability of the proposed system to real-world robots, we deploy and accelerate the feature detection and matching networks using C++ and NVIDIA TensorRT. Extensive experiments conducted on various datasets demonstrate that our system outperforms other state-of-the-art visual SLAM systems in illumination-challenging environments. Efficiency evaluations show that our system can run at 73 Hz on a PC and 40 Hz on an embedded platform. Our implementation is open-sourced: this https URL.

@article{xu2025_2408.03520,
  title={AirSLAM: An Efficient and Illumination-Robust Point-Line Visual SLAM System},
  author={Kuan Xu and Yuefan Hao and Shenghai Yuan and Chen Wang and Lihua Xie},
  journal={arXiv preprint arXiv:2408.03520},
  year={2025}
}