OmniGeo: Towards a Multimodal Large Language Models for Geospatial Artificial Intelligence

20 March 2025
Long Yuan
Fengran Mo
Kaiyu Huang
Wenjie Wang
Wangyuxuan Zhai
Xiaoyu Zhu
You Li
Jinan Xu
Jian-Yun Nie
Abstract

The rapid advancement of multimodal large language models (LLMs) has opened new frontiers in artificial intelligence, enabling the integration of diverse large-scale data types such as text, images, and spatial information. In this paper, we explore the potential of multimodal LLMs (MLLMs) for geospatial artificial intelligence (GeoAI), a field that leverages spatial data to address challenges in domains including Geospatial Semantics, Health Geography, Urban Geography, Urban Perception, and Remote Sensing. We propose an MLLM, OmniGeo, tailored to geospatial applications and capable of processing and analyzing heterogeneous data sources, including satellite imagery, geospatial metadata, and textual descriptions. By combining the strengths of natural language understanding and spatial reasoning, our model improves both instruction following and the accuracy of GeoAI systems. Results demonstrate that our model outperforms task-specific models and existing LLMs on diverse geospatial tasks, effectively handling their multimodal nature while achieving competitive results on zero-shot geospatial tasks. Our code will be released after publication.
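To make the multimodal setup concrete, the sketch below shows one way the three input modalities named in the abstract (satellite imagery, geospatial metadata, and a textual instruction) could be bundled into a single prompt payload for an instruction-following MLLM. This is purely illustrative: the payload schema, field names, and the function name are assumptions, not the OmniGeo interface, which has not been released (the paper states code will be released after publication). Only the Python standard library is used.

# Hypothetical sketch: assembling a multimodal geospatial prompt.
# The schema and names below are invented for illustration only.
import base64
import json
from pathlib import Path

def build_geospatial_prompt(image_path: str, metadata: dict, instruction: str) -> str:
    """Bundle a satellite tile, geospatial metadata, and a text instruction
    into one JSON payload (hypothetical schema, not the OmniGeo API)."""
    image_b64 = base64.b64encode(Path(image_path).read_bytes()).decode("ascii")
    return json.dumps({
        "instruction": instruction,                     # natural-language task description
        "image": {"format": "png", "data": image_b64},  # satellite tile, base64-encoded
        "metadata": metadata,                           # e.g. coordinates, resolution, timestamp
    })

# Example usage with made-up values:
payload = build_geospatial_prompt(
    image_path="tile_34N_118W.png",
    metadata={"lat": 34.05, "lon": -118.24, "gsd_m": 10.0},
    instruction="Classify the dominant land-use category in this tile.",
)

In practice, such a payload would be passed to a multimodal model's inference endpoint; how OmniGeo actually ingests and fuses these modalities is described in the paper, not here.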

@article{yuan2025_2503.16326,
  title={OmniGeo: Towards a Multimodal Large Language Models for Geospatial Artificial Intelligence},
  author={Long Yuan and Fengran Mo and Kaiyu Huang and Wenjie Wang and Wangyuxuan Zhai and Xiaoyu Zhu and You Li and Jinan Xu and Jian-Yun Nie},
  journal={arXiv preprint arXiv:2503.16326},
  year={2025}
}