Harnessing Language for Coordination: A Framework and Benchmark for LLM-Driven Multi-Agent Control

Abstract

Large Language Models (LLMs) have demonstrated remarkable performance across a variety of tasks, yet their potential to help a human coordinate many agents is promising but largely under-explored. Such capabilities would be valuable in disaster response, urban planning, and real-time strategy scenarios. In this work, we introduce (1) a real-time strategy game benchmark designed to evaluate these abilities and (2) a novel framework we term HIVE. HIVE empowers a single human to coordinate swarms of up to 2,000 agents through a natural-language dialog with an LLM. We present promising results on this multi-agent benchmark, with our hybrid approach solving tasks such as coordinating agent movements, exploiting unit weaknesses, leveraging human annotations, and understanding terrain and strategic points. Our findings also highlight critical limitations of current models, including difficulties in processing spatial visual information and challenges in formulating long-term strategic plans. This work sheds light on the potential and limitations of LLMs in human-swarm coordination, paving the way for future research in this area. The HIVE project page, this http URL, includes videos of the system in action.
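The abstract describes HIVE only at a high level. As a rough illustration of the natural-language-to-swarm-control idea, the sketch below shows how a structured reply from an LLM could be fanned out as waypoints to a named group of agents. This is not the authors' implementation; every name here (MoveOrder, dispatch, the JSON reply format) is a hypothetical assumption made for illustration.

```python
# Illustrative sketch only -- not the HIVE implementation.
# Assumes the LLM replies with a JSON list of {"group": ..., "target": [x, y]} orders.
from dataclasses import dataclass
import json


@dataclass
class MoveOrder:
    group: str            # named subset of the swarm, e.g. "scouts"
    target: tuple         # (x, y) waypoint on the map


def parse_llm_reply(reply: str) -> list[MoveOrder]:
    """Convert the (assumed) JSON reply from the LLM into structured orders."""
    return [MoveOrder(o["group"], tuple(o["target"])) for o in json.loads(reply)]


def dispatch(orders: list[MoveOrder], swarm: dict[str, list]) -> None:
    """Give every agent in an order's group that group's waypoint."""
    for order in orders:
        for agent in swarm.get(order.group, []):
            agent["waypoint"] = order.target


# Example: a reply the LLM might produce for "send the scouts to the north bridge".
reply = '[{"group": "scouts", "target": [12, 87]}]'
swarm = {"scouts": [{"id": i, "waypoint": None} for i in range(3)]}
dispatch(parse_llm_reply(reply), swarm)
print(swarm["scouts"][0]["waypoint"])  # (12, 87)
```

The point of the pattern is that a single natural-language command is expanded into many per-agent orders, which is what lets one human steer a swarm of thousands of units.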

@article{anne2025_2412.11761,
  title={Harnessing Language for Coordination: A Framework and Benchmark for LLM-Driven Multi-Agent Control},
  author={Timothée Anne and Noah Syrkis and Meriem Elhosni and Florian Turati and Franck Legendre and Alain Jaquier and Sebastian Risi},
  journal={arXiv preprint arXiv:2412.11761},
  year={2025}
}