
OpenCity3D: What do Vision-Language Models know about Urban Environments?

Abstract

Vision-language models (VLMs) show great promise for 3D scene understanding but are mainly applied to indoor spaces or autonomous driving, focusing on low-level tasks such as segmentation. This work expands their use to urban-scale environments by leveraging 3D reconstructions from multi-view aerial imagery. We propose OpenCity3D, an approach that addresses high-level tasks, such as population density estimation, building age classification, property price prediction, crime rate assessment, and noise pollution evaluation. Our findings highlight OpenCity3D's impressive zero-shot and few-shot capabilities, showcasing adaptability to new contexts. This research establishes a new paradigm for language-driven urban analytics, enabling applications in planning, policy, and environmental monitoring. See our project page: this http URL

@article{bieri2025_2503.16776,
  title={OpenCity3D: What do Vision-Language Models know about Urban Environments?},
  author={Valentin Bieri and Marco Zamboni and Nicolas S. Blumer and Qingxuan Chen and Francis Engelmann},
  journal={arXiv preprint arXiv:2503.16776},
  year={2025}
}