Contextualized Autonomous Drone Navigation using LLMs Deployed in Edge-Cloud Computing

Autonomous navigation systems are typically trained offline on diverse scenarios and fine-tuned online from real-world experience. However, the real world is dynamic and changeable, and many environmental encounters and effects cannot be accounted for in real time because they are difficult to describe in offline training data, or even to characterize in online scenarios. A human operator, by contrast, can describe these dynamic environmental encounters through natural language, adding semantic context. This research deploys Large Language Models (LLMs) to perform real-time contextual code adjustment for autonomous navigation. A challenge not yet evaluated in the literature is which LLMs are appropriate and where these computationally heavy algorithms should sit within the computation-communication trade-offs of edge-cloud computing architectures. In this paper, we evaluate how different LLMs can dynamically adjust navigation map parameters (e.g., contour map shaping) and derive navigation task instruction sets. We then evaluate which LLMs are most suitable and where they should sit in future edge-cloud architectures for 6G telecommunication networks.
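To make the idea of LLM-driven map-parameter adjustment concrete, the following is a minimal sketch. It assumes a hypothetical setup (not from the paper) in which the LLM, prompted with the operator's natural-language description, returns a structured JSON adjustment that is then applied to a grid cost map; the LLM response is mocked here, and the region/scale schema is an illustrative assumption.

```python
import json

# Hypothetical LLM output for an operator utterance such as
# "strong crosswind near the east tower" -- mocked as structured JSON.
llm_response = '{"region": {"x": 8, "y": 5, "radius": 2}, "cost_scale": 3.0}'

def apply_context(cost_map, adjustment):
    """Scale navigation cost-map cells inside the region the operator described."""
    region, scale = adjustment["region"], adjustment["cost_scale"]
    cx, cy, r = region["x"], region["y"], region["radius"]
    for y, row in enumerate(cost_map):
        for x in range(len(row)):
            if (x - cx) ** 2 + (y - cy) ** 2 <= r ** 2:
                row[x] *= scale
    return cost_map

# A 12x8 uniform cost map; cells near (8, 5) become 3x costlier to traverse,
# so a downstream planner will route the drone around the described hazard.
grid = [[1.0] * 12 for _ in range(8)]
grid = apply_context(grid, json.loads(llm_response))
```

In a real edge-cloud deployment the mocked string would be replaced by an actual LLM call, and the latency of that call is precisely what makes the edge-versus-cloud placement question studied in the paper significant.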
@article{chen2025_2504.00607,
  title   = {Contextualized Autonomous Drone Navigation using LLMs Deployed in Edge-Cloud Computing},
  author  = {Hongqian Chen and Yun Tang and Antonios Tsourdos and Weisi Guo},
  journal = {arXiv preprint arXiv:2504.00607},
  year    = {2025}
}