
DART-LLM: Dependency-Aware Multi-Robot Task Decomposition and Execution using Large Language Models

Abstract

Large Language Models (LLMs) have demonstrated promising reasoning capabilities in robotics; however, their application in multi-robot systems remains limited, particularly in handling task dependencies. This paper introduces DART-LLM, a novel framework that employs Directed Acyclic Graphs (DAGs) to model task dependencies, enabling the decomposition of natural language instructions into well-coordinated subtasks for multi-robot execution. DART-LLM comprises four key components: a Question-Answering (QA) LLM module for dependency-aware task decomposition, a Breakdown Function module for robot assignment, an Actuation module for execution, and a Vision-Language Model (VLM)-based object detector for environmental perception, achieving end-to-end task execution. Experimental results across three task complexity levels demonstrate that DART-LLM achieves state-of-the-art performance, significantly outperforming the baseline across all evaluation metrics. Among the tested models, DeepSeek-r1-671B achieves the highest success rate, whereas Llama-3.1-8B exhibits superior response time reliability. Ablation studies further confirm that explicit dependency modeling notably enhances the performance of smaller models, facilitating efficient deployment on resource-constrained platforms. Please refer to the project website (this https URL) for videos and code.
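To make the DAG-based dependency handling concrete, the sketch below is a minimal illustration (not the authors' implementation): it assumes hypothetical subtask names and a dependency table such as a QA LLM module might emit, and uses Python's standard graphlib to release subtasks for dispatch only once their prerequisites complete, so independent subtasks can run in parallel on different robots.

# Minimal sketch of dependency-aware subtask execution over a DAG.
# Subtask names and the decomposition are hypothetical examples for an
# instruction like "Clear the rubble, then both dump trucks haul it away."
from graphlib import TopologicalSorter  # Python 3.9+

# Each key is a subtask; the value lists its prerequisite subtasks.
subtasks = {
    "t1_move_excavator": [],
    "t2_clear_rubble":   ["t1_move_excavator"],
    "t3_load_truck_a":   ["t2_clear_rubble"],
    "t4_load_truck_b":   ["t2_clear_rubble"],
    "t5_haul_truck_a":   ["t3_load_truck_a"],
    "t6_haul_truck_b":   ["t4_load_truck_b"],
}

ts = TopologicalSorter(subtasks)
ts.prepare()
while ts.is_active():
    ready = list(ts.get_ready())            # subtasks whose prerequisites are done
    print("dispatch in parallel:", ready)   # a robot-assignment step would go here
    ts.done(*ready)                         # mark complete once execution finishes

Running this prints three dispatch waves: the excavator moves and clears first, then both loading subtasks become ready together, then both hauling subtasks; the DAG, rather than a fixed sequence, determines which subtasks may overlap.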

@article{wang2025_2411.09022,
  title={DART-LLM: Dependency-Aware Multi-Robot Task Decomposition and Execution using Large Language Models},
  author={Yongdong Wang and Runze Xiao and Jun Younes Louhi Kasahara and Ryosuke Yajima and Keiji Nagatani and Atsushi Yamashita and Hajime Asama},
  journal={arXiv preprint arXiv:2411.09022},
  year={2025}
}