
RLOMM: An Efficient and Robust Online Map Matching Framework with Reinforcement Learning

Abstract

Online map matching is a fundamental problem in location-based services, aiming to match trajectory data onto a road network incrementally, one point at a time. However, existing methods fail to meet the efficiency, robustness, and accuracy requirements of large-scale online applications, leaving this task challenging. This paper introduces a novel framework that achieves accurate and efficient matching while remaining robust across diverse scenarios. To improve efficiency, we first model online map matching as an Online Markov Decision Process (OMDP) based on its inherent characteristics, a formulation that efficiently fuses historical and real-time data and avoids redundant computation. Next, to enhance robustness, we design a reinforcement learning method that robustly handles real-time data from dynamically changing environments. In particular, we propose a novel model learning process and a comprehensive reward function, allowing the model to make sound current matches from a future-oriented perspective and to continuously update and optimize itself from feedback during decision making. Lastly, to address the heterogeneity between trajectories and roads, we design distinct graph structures, enabling efficient representation learning through graph and recurrent neural networks. To further align trajectory and road data, we introduce contrastive learning to reduce their distance in the latent space, promoting effective fusion of the two. Extensive evaluations on three real-world datasets confirm that our method significantly outperforms existing state-of-the-art solutions in accuracy, efficiency, and robustness.
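To make the OMDP framing concrete, below is a minimal PyTorch-style sketch of one online matching episode; it is an illustration under assumptions, not the paper's implementation. A recurrent state folds each incoming GPS point into a summary of the matched history, the action picks a candidate road segment, and discounted returns supply the future-oriented credit signal the abstract describes. All names (`PolicyNet`, `run_episode`, `get_candidates`, `reward_fn`) are hypothetical.

```python
import torch
import torch.nn as nn

class PolicyNet(nn.Module):
    """Scores candidate road segments against a recurrent matching state."""
    def __init__(self, obs_dim: int, cand_dim: int, hidden: int = 64):
        super().__init__()
        self.gru = nn.GRUCell(obs_dim, hidden)         # folds each point into the state
        self.score = nn.Bilinear(hidden, cand_dim, 1)  # state-candidate affinity

    def forward(self, obs, cand_emb, h):
        h = self.gru(obs, h)                           # update matching state
        s = h.repeat(cand_emb.size(0), 1)              # broadcast state to candidates
        return self.score(s, cand_emb).squeeze(-1), h  # one logit per candidate

def run_episode(points, get_candidates, reward_fn, policy, hidden=64, gamma=0.9):
    """Match a trajectory point by point; returns a REINFORCE loss and matches."""
    h = torch.zeros(1, hidden)
    log_probs, rewards, matched = [], [], []
    for p in points:                                   # points arrive one at a time
        cand_ids, cand_emb = get_candidates(p)         # nearby segments + embeddings
        logits, h = policy(p.unsqueeze(0), cand_emb, h)
        dist = torch.distributions.Categorical(logits=logits)
        a = dist.sample()                              # pick a segment (exploration)
        log_probs.append(dist.log_prob(a))
        matched.append(cand_ids[a.item()])
        rewards.append(reward_fn(p, cand_ids[a.item()]))
    returns, G = [], 0.0                               # discounted, future-oriented credit
    for r in reversed(rewards):
        G = r + gamma * G
        returns.insert(0, G)
    returns = torch.tensor(returns)
    if returns.numel() > 1:                            # normalize for stable gradients
        returns = (returns - returns.mean()) / (returns.std() + 1e-8)
    loss = -(torch.stack(log_probs) * returns).sum()
    return loss, matched
```

Because the reward enters only through the discounted return, feedback on later matches propagates back to earlier decisions, which is one way to realize the continuous update-from-feedback loop mentioned above.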
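On the road side, the "distinct graph structures" suggest a graph encoder over road segments, which could supply the `cand_emb` candidate embeddings used above. The abstract does not specify the construction, so this sketch assumes a segment-level graph (edges between segments sharing an intersection) and a plain two-layer GCN with a precomputed normalized adjacency:

```python
import torch
import torch.nn as nn

def normalize_adj(adj: torch.Tensor) -> torch.Tensor:
    """Symmetric normalization D^-1/2 (A + I) D^-1/2 of a dense adjacency."""
    a = adj + torch.eye(adj.size(0))
    d = a.sum(dim=1).rsqrt()
    return d.unsqueeze(1) * a * d.unsqueeze(0)

class RoadGCN(nn.Module):
    """Two-layer GCN producing one embedding per road segment."""
    def __init__(self, in_dim: int, hidden: int, out_dim: int):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hidden)
        self.lin2 = nn.Linear(hidden, out_dim)

    def forward(self, x: torch.Tensor, adj_norm: torch.Tensor) -> torch.Tensor:
        x = torch.relu(adj_norm @ self.lin1(x))        # aggregate 1-hop neighbors
        return adj_norm @ self.lin2(x)                 # second propagation step
```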
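Finally, the trajectory-road alignment can be sketched with a symmetric InfoNCE objective, a standard contrastive loss; the abstract does not give the paper's exact formulation. Matched trajectory-road embedding pairs are pulled together while other pairs in the batch serve as negatives:

```python
import torch
import torch.nn.functional as F

def info_nce(traj_emb: torch.Tensor, road_emb: torch.Tensor,
             temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE: row i of traj_emb should be closest to row i of
    road_emb (its matched road); other rows act as in-batch negatives."""
    traj = F.normalize(traj_emb, dim=-1)
    road = F.normalize(road_emb, dim=-1)
    logits = traj @ road.t() / temperature             # (B, B) cosine similarities
    labels = torch.arange(traj.size(0))                # positives on the diagonal
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))
```

Minimizing this loss decreases the latent-space distance between each trajectory representation and its matched road representation, the alignment effect the abstract describes.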

@article{chen2025_2502.06825,
  title={RLOMM: An Efficient and Robust Online Map Matching Framework with Reinforcement Learning},
  author={Minxiao Chen and Haitao Yuan and Nan Jiang and Zhihan Zheng and Sai Wu and Ao Zhou and Shangguang Wang},
  journal={arXiv preprint arXiv:2502.06825},
  year={2025}
}