The Spatial Blindspot of Vision-Language Models

Nahid Alam
Leema Krishna Murali
Siddhant Bharadwaj
Patrick Liu
Timothy Chung
Drishti Sharma
Akshata A
Kranthi Kiran
Wesley Tam
Bala Krishna S Vegesna
Main: 7 pages, 5 figures, 6 tables; bibliography: 3 pages
Abstract

Vision-language models (VLMs) have advanced rapidly, but their ability to capture spatial relationships remains a blindspot. Current VLMs are typically built on contrastive language-image pretraining (CLIP)-style image encoders. The training recipe often flattens images into 1D patch sequences, discarding the 2D structure necessary for spatial reasoning. We argue that this lack of spatial awareness is a missing dimension in VLM design and a bottleneck for applications requiring spatial grounding, such as robotics and embodied AI. To address this, we investigate (i) image encoders trained with alternative objectives and (ii) 2D positional encodings. Our experiments show that these architectural choices can lead to improved spatial reasoning on several benchmarks.
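To illustrate the distinction the abstract draws, the sketch below contrasts a standard 1D positional encoding over a flattened patch sequence with a 2D encoding that keeps row and column identity. This is a minimal, hedged example of one common 2D scheme (sinusoidal, with half the channels per axis); the paper's exact encoder, encoding variant, grid size, and embedding dimension are not specified here, so those choices are illustrative assumptions.

```python
# Minimal sketch (not the paper's exact implementation) of 1D vs. 2D sinusoidal
# positional encodings for a grid of image patches. Grid size and embedding
# dimension below are illustrative assumptions.
import numpy as np

def sincos_1d(positions: np.ndarray, dim: int) -> np.ndarray:
    """Standard 1D sinusoidal encoding for a vector of positions."""
    assert dim % 2 == 0
    omega = 1.0 / (10000 ** (np.arange(dim // 2) / (dim // 2)))
    angles = positions[:, None] * omega[None, :]                      # (N, dim/2)
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=1)   # (N, dim)

def sincos_2d(grid_h: int, grid_w: int, dim: int) -> np.ndarray:
    """2D encoding: half the channels encode the row, half the column."""
    assert dim % 4 == 0
    ys, xs = np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing="ij")
    emb_y = sincos_1d(ys.reshape(-1).astype(float), dim // 2)         # (H*W, dim/2)
    emb_x = sincos_1d(xs.reshape(-1).astype(float), dim // 2)         # (H*W, dim/2)
    return np.concatenate([emb_y, emb_x], axis=1)                     # (H*W, dim)

if __name__ == "__main__":
    H, W, D = 14, 14, 64  # e.g. a 14x14 patch grid (224px image, 16px patches)
    # Flattened 1D encoding: the last patch of one row and the first patch of
    # the next row get adjacent positions, even though they are far apart in 2D.
    pe_1d = sincos_1d(np.arange(H * W, dtype=float), D)
    # 2D encoding: each patch keeps its row/column coordinates.
    pe_2d = sincos_2d(H, W, D)
    print(pe_1d.shape, pe_2d.shape)  # (196, 64) (196, 64)
```

The example only shows why flattening discards grid structure; whether a given 2D encoding improves spatial reasoning is the empirical question the paper's benchmarks address.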
