
MMS-VPR: Multimodal Street-Level Visual Place Recognition Dataset and Benchmark

Abstract

Existing visual place recognition (VPR) datasets predominantly rely on vehicle-mounted imagery, lack multimodal diversity, and underrepresent dense, mixed-use street-level spaces, especially in non-Western urban contexts. To address these gaps, we introduce MMS-VPR, a large-scale multimodal dataset for street-level place recognition in complex, pedestrian-only environments. The dataset comprises 78,575 annotated images and 2,512 video clips captured across 207 locations in a ~70,800 m² open-air commercial district in Chengdu, China. Each image is labeled with precise GPS coordinates, a timestamp, and textual metadata, and the collection covers varied lighting conditions, viewpoints, and timeframes. MMS-VPR follows a systematic and replicable data collection protocol with minimal device requirements, lowering the barrier to scalable dataset creation. Importantly, the dataset forms an inherent spatial graph with 81 nodes, 125 edges, and 1 subgraph, enabling structure-aware place recognition. We further define two application-specific subsets, Dataset_Edges and Dataset_Points, to support fine-grained and graph-based evaluation tasks. Extensive benchmarks using conventional VPR models, graph neural networks, and multimodal baselines show substantial improvements when leveraging multimodal and structural cues. MMS-VPR facilitates future research at the intersection of computer vision, geospatial understanding, and multimodal reasoning. The dataset is publicly available at this https URL.
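
As an illustration only, the sketch below shows one way the abstract's spatial graph of places could be assembled for structure-aware experiments. The file names (nodes.csv, edges.csv), the column names (place_id, lat, lon, src, dst), and the use of networkx are our assumptions for this sketch, not part of the released dataset; consult the dataset's documentation for the actual schema.

    # Minimal sketch: build the spatial graph of places with networkx.
    # File names and columns below are hypothetical placeholders.
    import csv
    import networkx as nx

    def load_place_graph(nodes_csv: str, edges_csv: str) -> nx.Graph:
        g = nx.Graph()
        with open(nodes_csv, newline="") as f:
            for row in csv.DictReader(f):
                # Each node is a place annotated with GPS coordinates.
                g.add_node(row["place_id"],
                           lat=float(row["lat"]),
                           lon=float(row["lon"]))
        with open(edges_csv, newline="") as f:
            for row in csv.DictReader(f):
                # Each edge connects two adjacent places (a street segment).
                g.add_edge(row["src"], row["dst"])
        return g

    if __name__ == "__main__":
        g = load_place_graph("nodes.csv", "edges.csv")
        # The paper reports 81 nodes and 125 edges for the full graph.
        print(g.number_of_nodes(), g.number_of_edges())

A graph built this way can be handed directly to graph neural network tooling or used to restrict retrieval to spatially adjacent places, which is the kind of structural cue the benchmarks leverage.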

@article{ou2025_2505.12254,
  title={MMS-VPR: Multimodal Street-Level Visual Place Recognition Dataset and Benchmark},
  author={Yiwei Ou and Xiaobin Ren and Ronggui Sun and Guansong Gao and Ziyi Jiang and Kaiqi Zhao and Manfredo Manfredini},
  journal={arXiv preprint arXiv:2505.12254},
  year={2025}
}