GeoGround: A Unified Large Vision-Language Model for Remote Sensing Visual Grounding

Abstract

Remote sensing (RS) visual grounding aims to use natural language expressions to locate specific objects (in the form of a bounding box or segmentation mask) in RS images, enhancing human interaction with intelligent RS interpretation systems. Early research in this area was primarily based on horizontal bounding boxes (HBBs), but as more diverse RS datasets have become available, tasks involving oriented bounding boxes (OBBs) and segmentation masks have emerged. In practical applications, different targets require different grounding types: an HBB localizes an object's position, an OBB additionally provides its orientation, and a mask depicts its shape. However, existing specialized methods are typically tailored to a single type of RS visual grounding task and are hard to generalize across tasks. In contrast, large vision-language models (VLMs) exhibit powerful multi-task learning capabilities but struggle to handle dense prediction tasks such as segmentation. This paper proposes GeoGround, a novel framework that unifies support for HBB, OBB, and mask RS visual grounding tasks, allowing flexible output selection. Rather than customizing the architecture of the VLM, our work aims to elegantly support pixel-level visual grounding output through the Text-Mask technique. We define prompt-assisted and geometry-guided learning to enhance consistency across different signals. Experimental results show that GeoGround achieves strong performance across four RS visual grounding tasks, matching the performance of specialized methods on multiple benchmarks. Code is available at this https URL.
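The key property behind such unification is that box- and pixel-level grounding outputs can all be expressed as token-friendly strings emitted by a single text-generating VLM. The sketch below is not the paper's implementation; it is a minimal illustration, under the assumption of hypothetical helper functions, coordinate quantization, and a simple grid-based string encoding standing in for a Text-Mask style representation, of how an HBB, an OBB, and a segmentation mask might each be serialized as text.

```python
import numpy as np

# Minimal illustration (not the paper's code): serialize the three grounding
# signal types -- HBB, OBB, and segmentation mask -- into compact text strings
# so that a single text-generating VLM can produce any of them.

def hbb_to_text(box, img_w, img_h, bins=1000):
    """Quantize an axis-aligned box (x1, y1, x2, y2) into integer bins."""
    x1, y1, x2, y2 = box
    q = lambda v, s: int(round(v / s * (bins - 1)))
    return f"<hbb>{q(x1, img_w)},{q(y1, img_h)},{q(x2, img_w)},{q(y2, img_h)}</hbb>"

def obb_to_text(box, img_w, img_h, bins=1000):
    """Quantize an oriented box (cx, cy, w, h, angle_deg) into integer bins."""
    cx, cy, w, h, angle = box
    q = lambda v, s: int(round(v / s * (bins - 1)))
    return (f"<obb>{q(cx, img_w)},{q(cy, img_h)},"
            f"{q(w, img_w)},{q(h, img_h)},{int(round(angle)) % 180}</obb>")

def mask_to_text(mask, grid=16):
    """Downsample a binary mask to a small grid and emit it as a 0/1 string,
    one hypothetical way to realize a text-encoded mask."""
    h, w = mask.shape
    ys = np.linspace(0, h - 1, grid).astype(int)
    xs = np.linspace(0, w - 1, grid).astype(int)
    coarse = mask[np.ix_(ys, xs)] > 0
    rows = ["".join("1" if v else "0" for v in row) for row in coarse]
    return "<mask>" + "|".join(rows) + "</mask>"

if __name__ == "__main__":
    mask = np.zeros((512, 512), dtype=np.uint8)
    mask[100:300, 150:400] = 1
    print(hbb_to_text((150, 100, 400, 300), 512, 512))
    print(obb_to_text((275, 200, 250, 200, 30), 512, 512))
    print(mask_to_text(mask))
```

Because all three outputs are plain strings, the same language-modeling objective and decoding path can in principle cover every grounding type, which is one way to read the flexible output selection described in the abstract.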

@article{zhou2025_2411.11904,
  title={GeoGround: A Unified Large Vision-Language Model for Remote Sensing Visual Grounding},
  author={Yue Zhou and Mengcheng Lan and Xiang Li and Litong Feng and Yiping Ke and Xue Jiang and Qingyun Li and Xue Yang and Wayne Zhang},
  journal={arXiv preprint arXiv:2411.11904},
  year={2025}
}