EffoVPR: Effective Foundation Model Utilization for Visual Place Recognition

The task of Visual Place Recognition (VPR) is to predict the location of a query image from a database of geo-tagged images. Recent studies have highlighted the significant advantage of employing pre-trained foundation models such as DINOv2 for VPR. However, these models are often deemed inadequate for VPR without further fine-tuning on VPR-specific data. In this paper, we present an effective approach to harnessing the potential of a foundation model for VPR. We show that features extracted from self-attention layers can act as a powerful re-ranker for VPR, even in a zero-shot setting. Our method not only outperforms previous zero-shot approaches but also achieves results competitive with several supervised methods. We then show that a single-stage approach that pools features from internal ViT layers can produce global descriptors that achieve state-of-the-art performance, while remaining compact down to 128D. Moreover, integrating our local foundation features for re-ranking widens this performance gap further. Our method also demonstrates exceptional robustness and generalization, setting a new state of the art while handling challenging conditions such as occlusion, day-night transitions, and seasonal variations.
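As a rough illustration of the two-stage pipeline the abstract describes, the sketch below retrieves candidates with a global DINOv2 descriptor and re-ranks them with per-patch features captured from the last self-attention layer. The specific choices here (the ViT-B/14 backbone, the value facet of the final block, and a mutual-nearest-neighbor inlier count as the re-ranking score) are assumptions made for illustration, not the paper's exact configuration.

```python
# Minimal zero-shot sketch: global retrieval with a DINOv2 CLS descriptor,
# then re-ranking with per-patch features from the last self-attention layer.
# Layer/facet/matching choices below are illustrative assumptions.
import torch
import torch.nn.functional as F

model = torch.hub.load("facebookresearch/dinov2", "dinov2_vitb14").eval()

_qkv = {}  # captures the fused qkv projection of the last attention block
def _hook(module, inputs, output):
    _qkv["out"] = output
model.blocks[-1].attn.qkv.register_forward_hook(_hook)

@torch.no_grad()
def extract_features(img):
    # img: (1, 3, H, W) with H and W multiples of the 14-pixel patch size
    out = model.forward_features(img)
    g = F.normalize(out["x_norm_clstoken"], dim=-1)      # global descriptor
    B, N, C3 = _qkv["out"].shape
    q, k, v = _qkv["out"].reshape(B, N, 3, C3 // 3).unbind(2)
    local = F.normalize(v[:, 1:], dim=-1)                # per-patch V features (drop CLS)
    return g.squeeze(0), local.squeeze(0)

def rerank_score(local_q, local_db):
    # Count mutual nearest neighbors between query and candidate patches.
    sim = local_q @ local_db.T                           # (Nq, Nd) cosine similarities
    nn_q = sim.argmax(dim=1)                             # query patch -> db patch
    nn_d = sim.argmax(dim=0)                             # db patch -> query patch
    mutual = nn_d[nn_q] == torch.arange(nn_q.numel(), device=nn_q.device)
    return int(mutual.sum())

# Stage 1: rank the database by cosine similarity of global descriptors.
# Stage 2: re-rank the top-k candidates by rerank_score on local features.
```

Because both stages reuse activations from a single forward pass, the re-ranker adds no extra feature-extraction cost on top of the global retrieval, which is one plausible reading of why attention-layer features make an attractive zero-shot re-ranker.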
@article{tzachor2025_2405.18065,
  title={EffoVPR: Effective Foundation Model Utilization for Visual Place Recognition},
  author={Issar Tzachor and Boaz Lerner and Matan Levy and Michael Green and Tal Berkovitz Shalev and Gavriel Habib and Dvir Samuel and Noam Korngut Zailer and Or Shimshi and Nir Darshan and Rami Ben-Ari},
  journal={arXiv preprint arXiv:2405.18065},
  year={2025}
}