EarthMind: Towards Multi-Granular and Multi-Sensor Earth Observation with Large Multimodal Models

2 June 2025
Yan Shu
Bin Ren
Zhitong Xiong
Danda Pani Paudel
Luc Van Gool
Begüm Demir
N. Sebe
Paolo Rota
Topics: VLM
Main: 9 pages · 7 figures · 11 tables · Bibliography: 5 pages · Appendix: 7 pages
Abstract

Large Multimodal Models (LMMs) have demonstrated strong performance in various vision-language tasks. However, they often struggle to comprehensively understand Earth Observation (EO) data, which is critical for monitoring the environment and the effects of human activity on it. In this work, we present EarthMind, a novel vision-language framework for multi-granular and multi-sensor EO data understanding. EarthMind features two core components: (1) Spatial Attention Prompting (SAP), which reallocates attention within the LLM to enhance pixel-level understanding; and (2) Cross-modal Fusion, which aligns heterogeneous modalities into a shared space and adaptively reweighs tokens based on their information density for effective fusion. To facilitate multi-sensor fusion evaluation, we propose EarthMind-Bench, a comprehensive benchmark with over 2,000 human-annotated multi-sensor image-question pairs, covering a wide range of perception and reasoning tasks. Extensive experiments demonstrate the effectiveness of EarthMind. It achieves state-of-the-art performance on EarthMind-Bench, surpassing GPT-4o despite being only 4B in scale. Moreover, EarthMind outperforms existing methods on multiple public EO benchmarks, showcasing its potential to handle both multi-granular and multi-sensor challenges in a unified framework.
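The Cross-modal Fusion component described above (projecting heterogeneous sensor tokens into a shared space, then reweighting them by information density before fusion) can be illustrated with a minimal sketch. The snippet below is an assumption-laden illustration, not the authors' implementation: the CrossModalFusion class, the learned sigmoid scorer standing in for the information-density measure, and all dimensions and token counts are hypothetical.

# Minimal, hypothetical PyTorch sketch of the cross-modal fusion idea from the
# abstract. All names and dimensions are illustrative assumptions.
import torch
import torch.nn as nn


class CrossModalFusion(nn.Module):
    """Align two sensor modalities in a shared space and reweight their tokens."""

    def __init__(self, dim_opt: int, dim_sar: int, dim_shared: int):
        super().__init__()
        # Linear projections map heterogeneous modalities into one shared space.
        self.proj_opt = nn.Linear(dim_opt, dim_shared)
        self.proj_sar = nn.Linear(dim_sar, dim_shared)
        # A learned scorer assigns each token a scalar importance logit
        # (a stand-in for the paper's information-density measure).
        self.scorer = nn.Linear(dim_shared, 1)

    def forward(self, opt_tokens: torch.Tensor, sar_tokens: torch.Tensor) -> torch.Tensor:
        # opt_tokens: (B, N_opt, dim_opt); sar_tokens: (B, N_sar, dim_sar)
        opt = self.proj_opt(opt_tokens)         # (B, N_opt, D)
        sar = self.proj_sar(sar_tokens)         # (B, N_sar, D)
        tokens = torch.cat([opt, sar], dim=1)   # (B, N_opt + N_sar, D)
        weights = torch.sigmoid(self.scorer(tokens))  # per-token weights in (0, 1)
        return tokens * weights                 # reweighted fused token sequence


if __name__ == "__main__":
    fusion = CrossModalFusion(dim_opt=1024, dim_sar=768, dim_shared=512)
    optical = torch.randn(2, 196, 1024)  # e.g. optical patch tokens
    sar = torch.randn(2, 196, 768)       # e.g. SAR patch tokens
    print(fusion(optical, sar).shape)    # torch.Size([2, 392, 512])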

@article{shu2025_2506.01667,
  title={EarthMind: Towards Multi-Granular and Multi-Sensor Earth Observation with Large Multimodal Models},
  author={Yan Shu and Bin Ren and Zhitong Xiong and Danda Pani Paudel and Luc Van Gool and Begum Demir and Nicu Sebe and Paolo Rota},
  journal={arXiv preprint arXiv:2506.01667},
  year={2025}
}