SATGround: A Spatially-Aware Approach for Visual Grounding in Remote Sensing

Aysim Toker
Andreea-Maria Oncescu
Roy Miles
Ismail Elezi
Jiankang Deng
Main: 14 pages · 4 figures · 5 tables · Bibliography: 5 pages
Abstract

Vision-language models (VLMs) are emerging as powerful generalist tools for remote sensing, capable of integrating information across diverse tasks and enabling flexible, instruction-based interactions via a chat interface. In this work, we enhance VLM-based visual grounding in satellite imagery by proposing a novel structured localization mechanism. Our approach involves finetuning a pretrained VLM on a diverse set of instruction-following tasks, while interfacing a dedicated grounding module through specialized control tokens for localization. This method facilitates joint reasoning over both language and spatial information, significantly enhancing the model's ability to precisely localize objects in complex satellite scenes. We evaluate our framework on several remote sensing benchmarks, consistently improving the state-of-the-art, including a 33.2% relative improvement over previous methods on visual grounding. Our results highlight the benefits of integrating structured spatial reasoning into VLMs, paving the way for more reliable real-world satellite data analysis. Code will be released upon acceptance.
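The abstract describes interfacing a grounding module with the VLM through specialized control tokens. A minimal sketch of that general pattern is shown below; the token names, output format, and the stand-in grounding head are illustrative assumptions, not the paper's actual implementation:

```python
# Hypothetical sketch: a VLM emits control tokens around phrases that need
# localization, and a separate grounding module turns each marked phrase
# into a bounding box. All names and formats here are assumptions.
import re

def grounding_head(phrase, image_size=(1024, 1024)):
    """Stand-in for a learned localization module.

    A real module would consume the VLM's hidden states at the control-token
    positions; here we just return a dummy normalized box (x1, y1, x2, y2).
    """
    return (0.0, 0.0, 1.0, 1.0)

def parse_response(text):
    """Split a model response into clean text and grounded (phrase, box) pairs.

    Assumed output format: "... <ground>phrase</ground> ..."
    """
    boxes = []
    for m in re.finditer(r"<ground>(.*?)</ground>", text):
        phrase = m.group(1)
        boxes.append((phrase, grounding_head(phrase)))
    clean = re.sub(r"</?ground>", "", text)
    return clean, boxes

clean, boxes = parse_response(
    "There are two <ground>storage tanks</ground> near the harbor."
)
```

The design separates language generation from localization: the VLM only decides *what* to ground (by emitting control tokens), while a dedicated module decides *where*, allowing joint reasoning over language and spatial information.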
