MLO: Multi-Object Tracking and Lidar Odometry in Dynamic Environment
SLAM systems built on the static-scene assumption introduce significant estimation errors when many moving objects are in the field of view. Tracking and maintaining semantic objects benefits scene understanding and provides rich decision-making information to planning and control modules. This paper introduces MLO, a multi-object lidar odometry system that tracks ego-motion and semantic objects using only a lidar sensor. To track multiple objects accurately and robustly, we propose a least-squares estimator that fuses 3D bounding boxes and geometric point clouds to update object states. By analyzing the motion states of objects in the tracking list, the mapping module uses static objects and environmental features to eliminate accumulated error, while also providing continuous object trajectories in map coordinates. Our method is evaluated qualitatively and quantitatively in different scenarios on the public KITTI dataset. The experimental results show that the ego-localization accuracy of MLO surpasses that of state-of-the-art systems in highly dynamic, unstructured, and semantically unknown scenes. Meanwhile, the multi-object tracking method with semantic-geometry fusion also shows clear advantages in tracking accuracy and consistency over filtering-based methods.
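The abstract does not spell out the estimator, but the core idea of fusing a 3D bounding-box detection with a geometric point-cloud measurement by least squares can be sketched in a minimal form. The sketch below assumes both sources observe the same object center, so the weighted least-squares solution reduces to information-weighted averaging; the function name `fuse_ls` and the diagonal noise models are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def fuse_ls(z_box, cov_box, z_pts, cov_pts):
    """Weighted least-squares fusion of two measurements of an object center.

    With both measurements observing the state directly (H = I), minimizing
    the sum of Mahalanobis residuals gives
        x* = (W_b + W_p)^{-1} (W_b z_b + W_p z_p),  W = cov^{-1},
    i.e. each source is weighted by its information matrix.
    Returns the fused estimate and its covariance.
    """
    W_b = np.linalg.inv(cov_box)   # information from the 3D bounding box
    W_p = np.linalg.inv(cov_pts)   # information from the point-cloud geometry
    info = W_b + W_p               # combined information matrix
    x = np.linalg.solve(info, W_b @ z_box + W_p @ z_pts)
    return x, np.linalg.inv(info)

# Example: a noisy box center and a more precise point-cloud centroid.
z_box = np.array([2.00, 1.00, 0.50])
z_pts = np.array([2.10, 0.90, 0.55])
x, cov = fuse_ls(z_box, 0.04 * np.eye(3), z_pts, 0.01 * np.eye(3))
```

Because the point-cloud measurement here carries four times the information of the box detection, the fused center lands much closer to the point-cloud centroid, which matches the intuition that geometric evidence should dominate when detections are coarse.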