
FusDreamer: Label-efficient Remote Sensing World Model for Multimodal Data Classification

Abstract

World models significantly enhance hierarchical understanding, improving data integration and learning efficiency. To explore the potential of world models in the remote sensing (RS) field, this paper proposes a label-efficient remote sensing world model for multimodal data fusion (FusDreamer). FusDreamer uses the world model as a unified representation container to abstract common, high-level knowledge, promoting interactions across different types of data, i.e., hyperspectral image (HSI), light detection and ranging (LiDAR), and text data. First, a new latent diffusion fusion and multimodal generation paradigm (LaMG) is employed for its strong information-integration and detail-retention capabilities. Next, an open-world knowledge-guided consistency projection (OK-CP) module incorporates prompt representations of visually described objects and aligns language and visual features through contrastive learning; the domain gap can then be bridged by fine-tuning the pre-trained world model with limited samples. Finally, an end-to-end multitask combinatorial optimization (MuCO) strategy captures slight feature bias and constrains the diffusion process in a collaboratively learnable direction. Experiments conducted on four typical datasets demonstrate the effectiveness and advantages of the proposed FusDreamer. The corresponding code will be released at this https URL.
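The abstract does not specify how LaMG fuses the modality latents or parameterizes the diffusion process. As a rough, hypothetical illustration only, the sketch below fuses two modality latents by channel concatenation and applies a standard DDPM forward-noising step; all tensor names, shapes, and the concatenation rule are assumptions, not the authors' design.

import torch

def fuse_and_noise(hsi_latent, lidar_latent, t, alphas_cumprod):
    # Fuse modality latents by channel concatenation (an assumed fusion rule),
    # then apply the standard DDPM forward process q(z_t | z_0).
    # hsi_latent, lidar_latent: (B, C, H, W); t: (B,) long; alphas_cumprod: (T,).
    z0 = torch.cat([hsi_latent, lidar_latent], dim=1)
    noise = torch.randn_like(z0)
    a_bar = alphas_cumprod[t].view(-1, 1, 1, 1)   # cumulative alpha at step t
    z_t = a_bar.sqrt() * z0 + (1.0 - a_bar).sqrt() * noise
    return z_t, noise  # noise is the usual regression target for the denoiser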
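The OK-CP module aligns language and visual features through contrastive learning. A common instantiation of such an objective, assumed here rather than taken from the paper, is a CLIP-style symmetric InfoNCE loss over matched (visual, text) embedding pairs; the function name and the 0.07 temperature are illustrative.

import torch
import torch.nn.functional as F

def okcp_alignment_loss(visual_emb, text_emb, temperature=0.07):
    # Symmetric InfoNCE: each visual embedding should match its own
    # text embedding against all others in the batch, and vice versa.
    v = F.normalize(visual_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = v @ t.T / temperature                 # (B, B) cosine similarities
    labels = torch.arange(v.size(0), device=v.device)
    return 0.5 * (F.cross_entropy(logits, labels)
                  + F.cross_entropy(logits.t(), labels))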
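MuCO jointly optimizes the classification, diffusion, and alignment objectives end to end. The simplest way to convey the idea is a weighted combination of the three losses; the fixed weights below are placeholders, and the paper's actual combinatorial strategy presumably couples the tasks more tightly than a static weighted sum.

def muco_total_loss(cls_loss, diff_loss, align_loss,
                    w_cls=1.0, w_diff=0.5, w_align=0.5):
    # Hypothetical weighting; gradients from all three tasks flow jointly,
    # which is what lets the diffusion process be steered by the other losses.
    return w_cls * cls_loss + w_diff * diff_loss + w_align * align_loss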

@article{wang2025_2503.13814,
  title={FusDreamer: Label-efficient Remote Sensing World Model for Multimodal Data Classification},
  author={Jinping Wang and Weiwei Song and Hao Chen and Jinchang Ren and Huimin Zhao},
  journal={arXiv preprint arXiv:2503.13814},
  year={2025}
}