Beyond the Visible: Multispectral Vision-Language Learning for Earth Observation

20 March 2025
Clive Tinashe Marimo
Benedikt Blumenstiel
Maximilian Nitsche
Johannes Jakubik
Thomas Brunschwiler
Abstract

Vision-language models for Earth observation (EO) typically rely on the visible spectrum as their only model input, failing to leverage the rich spectral information recorded in the multispectral channels of satellite sensors. In this paper, we introduce Llama3-MS-CLIP, the first vision-language model pre-trained with contrastive learning on a large-scale multispectral dataset, and report the performance gains due to the extended spectral range. Furthermore, we present the largest image-caption dataset for multispectral data to date, consisting of one million Sentinel-2 samples and corresponding textual descriptions generated with Llama3-LLaVA-Next and Overture Maps data. We develop a scalable captioning pipeline, which is validated by domain experts. We evaluate Llama3-MS-CLIP on multispectral zero-shot image classification and retrieval using three datasets of varying complexity. Our results demonstrate that Llama3-MS-CLIP significantly outperforms other RGB-based approaches, improving classification accuracy by 6.77% on average and retrieval performance by 4.63% mAP compared to the second-best model. These results emphasize the relevance of multispectral vision-language learning. We release the image-caption dataset, code, and model weights under an open-source license.
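
To illustrate the zero-shot evaluation setting described in the abstract, the sketch below shows the generic CLIP-style procedure: an image tower that accepts all Sentinel-2 bands (not just RGB) and a ranking of class prompts by cosine similarity. The encoder classes and their shapes are hypothetical stand-ins, not the released Llama3-MS-CLIP API; only the similarity/argmax logic is standard CLIP-style zero-shot classification.

# Minimal sketch of CLIP-style zero-shot classification on multispectral imagery.
# MSImageEncoder is a hypothetical image tower for 12-band Sentinel-2 patches;
# the released Llama3-MS-CLIP interface may differ.
import torch
import torch.nn.functional as F

class MSImageEncoder(torch.nn.Module):
    """Toy image tower accepting all Sentinel-2 bands instead of RGB only."""
    def __init__(self, in_bands: int = 12, embed_dim: int = 512):
        super().__init__()
        self.backbone = torch.nn.Sequential(
            torch.nn.Conv2d(in_bands, 64, kernel_size=7, stride=2, padding=3),
            torch.nn.ReLU(),
            torch.nn.AdaptiveAvgPool2d(1),
            torch.nn.Flatten(),
            torch.nn.Linear(64, embed_dim),
        )

    def forward(self, x):
        return self.backbone(x)

@torch.no_grad()
def zero_shot_classify(image_encoder, text_embeds, images, class_names):
    """Rank class prompt embeddings by cosine similarity to each image embedding."""
    img = F.normalize(image_encoder(images), dim=-1)   # (B, D)
    txt = F.normalize(text_embeds, dim=-1)             # (C, D)
    logits = img @ txt.T                                # (B, C) cosine similarities
    preds = logits.argmax(dim=-1)
    return [class_names[i] for i in preds.tolist()]

# Example with random tensors standing in for Sentinel-2 patches and text embeddings.
classes = ["forest", "urban area", "water body"]
encoder = MSImageEncoder(in_bands=12)
text_embeds = torch.randn(len(classes), 512)            # would come from the text tower
batch = torch.randn(2, 12, 224, 224)                    # 12-band Sentinel-2 patches
print(zero_shot_classify(encoder, text_embeds, batch, classes))

In the paper's setting, the text embeddings would be produced by the model's text encoder from class-name prompts, and the reported accuracy and mAP gains come from the image tower consuming the full multispectral input rather than the three visible bands.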

@article{marimo2025_2503.15969,
  title={Beyond the Visible: Multispectral Vision-Language Learning for Earth Observation},
  author={Clive Tinashe Marimo and Benedikt Blumenstiel and Maximilian Nitsche and Johannes Jakubik and Thomas Brunschwiler},
  journal={arXiv preprint arXiv:2503.15969},
  year={2025}
}