
UEMM-Air: Make Unmanned Aerial Vehicles Perform More Multi-modal Tasks

Liang Yao
Shengxiang Xu
Chuanyi Zhang
Xinlei Zhang
Ting Wu
Zequan Wang
Shimin Di
Jun Zhou
Abstract

The development of multi-modal learning for Unmanned Aerial Vehicles (UAVs) typically relies on large amounts of pixel-aligned multi-modal image data. However, existing datasets face challenges such as limited modalities, high construction costs, and imprecise annotations. To address these issues, we propose UEMM-Air, a synthetic multi-modal, multi-task UAV dataset. Specifically, we simulate various UAV flight scenarios and object types using Unreal Engine (UE). We then design the UAV's flight logic to automatically collect data from different scenarios, perspectives, and altitudes. Furthermore, we propose a novel heuristic automatic annotation algorithm to generate accurate object detection labels. Finally, we use these labels to generate textual descriptions of the images so that UEMM-Air supports additional cross-modal tasks. In total, UEMM-Air consists of 120k image pairs spanning 6 modalities with precise annotations. Moreover, we conduct extensive experiments and establish new benchmark results on our dataset. We also find that models pre-trained on UEMM-Air outperform those pre-trained on similar datasets when transferred to downstream tasks. The dataset is publicly available (this https URL) to support research on multi-modal UAV tasks.
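As a rough illustration of the label-to-text step described above, the following minimal Python sketch shows one way detection labels could be turned into short image descriptions. The file layout, JSON schema, and caption template here are assumptions for illustration only, not the authors' actual pipeline.

```python
# Hypothetical sketch: building captions from object detection labels.
# The label format (a JSON file with an "objects" list of {"category": ...})
# is an assumption, not the UEMM-Air release format.
import json
from collections import Counter
from pathlib import Path


def caption_from_labels(label_path: Path) -> str:
    """Build a short description from an assumed per-image JSON label file."""
    with open(label_path) as f:
        annotations = json.load(f)

    # Count objects per category, e.g. {"car": 3, "person": 1}.
    counts = Counter(ann["category"] for ann in annotations.get("objects", []))

    if not counts:
        return "An aerial image with no annotated objects."

    parts = [f"{n} {name}{'s' if n > 1 else ''}" for name, n in counts.items()]
    return "An aerial image containing " + ", ".join(parts) + "."


if __name__ == "__main__":
    # "labels/" is a placeholder directory name.
    for label_file in sorted(Path("labels").glob("*.json")):
        print(label_file.name, "->", caption_from_labels(label_file))
```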

@article{yao2025_2406.06230,
  title={UEMM-Air: Make Unmanned Aerial Vehicles Perform More Multi-modal Tasks},
  author={Liang Yao and Fan Liu and Shengxiang Xu and Chuanyi Zhang and Xing Ma and Jianyu Jiang and Zequan Wang and Shimin Di and Jun Zhou},
  journal={arXiv preprint arXiv:2406.06230},
  year={2025}
}