
PLANET: A Collection of Benchmarks for Evaluating LLMs' Planning Capabilities

Abstract

Planning is central to agents and agentic AI. The ability to plan, e.g., to create a travel itinerary within a budget, holds immense potential in both scientific and commercial contexts. Moreover, optimal plans tend to require fewer resources than those produced by ad-hoc methods. To date, a comprehensive understanding of existing planning benchmarks appears to be lacking. Without it, comparing planning algorithms' performance across domains or selecting suitable algorithms for new scenarios remains challenging. In this paper, we examine a range of planning benchmarks to identify commonly used testbeds for algorithm development and to highlight potential gaps. These benchmarks are categorized into embodied environments, web navigation, scheduling, games and puzzles, and everyday task automation. Our study recommends the most appropriate benchmarks for various algorithms and offers insights to guide future benchmark development.

@article{li2025_2504.14773,
  title={PLANET: A Collection of Benchmarks for Evaluating LLMs' Planning Capabilities},
  author={Haoming Li and Zhaoliang Chen and Jonathan Zhang and Fei Liu},
  journal={arXiv preprint arXiv:2504.14773},
  year={2025}
}