UAV-VLA: Vision-Language-Action System for Large Scale Aerial Mission Generation

IEEE/ACM International Conference on Human-Robot Interaction (HRI), 2025
Main: 4 pages
7 figures
1 table
Bibliography: 1 page
Abstract

The UAV-VLA (Visual-Language-Action) system is a tool designed to facilitate communication with aerial robots. By integrating satellite imagery processing with a Visual Language Model (VLM) and the capabilities of GPT, UAV-VLA enables users to generate flight paths and action plans from simple text requests. The system leverages the rich contextual information in satellite images, enabling enhanced decision-making and mission planning. The combination of visual analysis by the VLM and natural language processing by GPT provides the user with a path-and-action set, making aerial operations more efficient and accessible. The newly developed method showed a 22% difference in the length of the generated trajectory and a mean error of 34.22 m in locating objects of interest on a map, measured by Euclidean distance using a K-Nearest Neighbors (KNN) matching approach.
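The KNN-based localization error mentioned above can be sketched as follows: a minimal illustration, assuming each predicted object location is matched to its nearest ground-truth object (1-NN) and the Euclidean distances are averaged. The function name and coordinates are hypothetical, not taken from the paper.

```python
from math import dist

def mean_knn_error(predicted, ground_truth):
    """For each predicted object location, take the Euclidean
    distance to its nearest ground-truth object (1-NN match)
    and average over all predictions (units: metres)."""
    errors = [min(dist(p, g) for g in ground_truth) for p in predicted]
    return sum(errors) / len(errors)

# Hypothetical coordinates in a local metric map frame:
pred = [(0.0, 0.0), (10.0, 0.0)]
gt = [(0.0, 3.0), (10.0, 4.0)]
print(mean_knn_error(pred, gt))  # → 3.5
```

Under this reading, the reported 34.22 m figure would be the value of such an average over the evaluation set.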
