Evaluating Foundation Models' 3D Understanding Through Multi-View Correspondence Analysis

Valentina Lilova
Toyesh Chakravorty
Julian I. Bibo
Emma Boccaletti
Brandon Li
Lívia Baxová
Cees G. M. Snoek
Mohammadreza Salehi
Main: 12 pages · 30 figures · 9 tables · Bibliography: 1 page · Appendix: 13 pages
Abstract

Benchmarking the 3D spatial understanding of foundation models is essential for real-world applications such as robotics and autonomous driving. Existing evaluations often rely on downstream fine-tuning with linear heads or task-specific decoders, making it difficult to isolate the intrinsic 3D reasoning ability of pre-trained encoders. In this work, we introduce a novel benchmark for in-context 3D scene understanding that requires no fine-tuning and directly probes the quality of dense visual features. Building on the Hummingbird framework, which evaluates in-context 2D scene understanding, we extend the setup to the 3D Multi-View ImageNet (MVImgNet) dataset. Given a set of images depicting objects at specific camera angles (keys), we benchmark segmentation performance on novel views (queries) and report scores in four difficulty categories (easy, medium, hard, and extreme) based on the key-query view contrast. We benchmark seven state-of-the-art foundation models and show that DINO-based encoders remain competitive across large viewpoint shifts. Our code is publicly available at this https URL.
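The evaluation protocol described above (Hummingbird-style in-context segmentation) boils down to labeling each query-view patch by retrieving the most similar patches from the key views, with no fine-tuning of the encoder. The following is a minimal sketch of that retrieval step, assuming dense patch features have already been extracted by a frozen encoder; the function name, the use of plain NumPy, and the k-nearest-neighbor majority vote are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def in_context_segment(key_feats, key_labels, query_feats, k=1):
    """Label query patches by nearest-neighbor retrieval over key patches.

    key_feats:   (N_k, D) dense patch features from the key views
    key_labels:  (N_k,)   integer per-patch class labels for the key views
    query_feats: (N_q, D) dense patch features from a novel (query) view
    Returns an (N_q,) array of predicted labels.
    """
    # L2-normalize so the dot product equals cosine similarity
    kf = key_feats / np.linalg.norm(key_feats, axis=1, keepdims=True)
    qf = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    sim = qf @ kf.T                          # (N_q, N_k) similarity matrix
    topk = np.argsort(-sim, axis=1)[:, :k]   # k most similar key patches
    votes = key_labels[topk]                 # (N_q, k) retrieved labels
    # majority vote over the k retrieved labels per query patch
    return np.array([np.bincount(v).argmax() for v in votes])
```

Under this protocol, segmentation quality on queries directly reflects how viewpoint-robust the encoder's dense features are, which is what the easy-to-extreme view-contrast categories measure.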
