Evaluating the Goal-Directedness of Large Language Models

Abstract
To what extent do LLMs use their capabilities towards their given goal? We take this as a measure of their goal-directedness. We evaluate goal-directedness on tasks that require information gathering, cognitive effort, and plan execution, using subtasks to infer each model's relevant capabilities. Our evaluations of LLMs from Google DeepMind, OpenAI, and Anthropic show that goal-directedness is relatively consistent across tasks, differs from task performance, and is only moderately sensitive to motivational prompts. Notably, most models are not fully goal-directed. We hope our goal-directedness evaluations will enable better monitoring of LLM progress and support more deliberate design choices of agentic properties in LLMs.
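To make the measurement idea concrete, here is a minimal sketch of one way such a score could be computed: compare observed performance on a composite task against the performance predicted from the model's subtask capabilities. The prediction rule (independent subtask success rates multiplied together) and all function names are illustrative assumptions for this sketch, not the paper's actual estimator.

```python
from dataclasses import dataclass


@dataclass
class TaskResult:
    subtask_scores: list[float]  # per-subtask success rates in [0, 1]
    composite_score: float       # success rate on the full composite task


def predicted_composite(subtask_scores: list[float]) -> float:
    """Predict full-task performance if the model applied all of its
    subtask capabilities. Here we multiply independent success rates;
    this is an illustrative assumption, not the paper's method."""
    p = 1.0
    for s in subtask_scores:
        p *= s
    return p


def goal_directedness(result: TaskResult) -> float:
    """Ratio of observed to capability-predicted performance, clipped
    to [0, 1]. A value near 1 means the model directs its capabilities
    fully toward the goal; lower values indicate unused capability."""
    predicted = predicted_composite(result.subtask_scores)
    if predicted == 0.0:
        return 0.0  # score undefined when the capabilities are absent
    return min(result.composite_score / predicted, 1.0)


# Example: a model that succeeds on each of three subtasks 90% of the
# time, but completes the full task only 60% of the time.
example = TaskResult(subtask_scores=[0.9, 0.9, 0.9], composite_score=0.6)
print(f"goal-directedness ~ {goal_directedness(example):.2f}")  # ~ 0.82
```

Under this toy model, a score below 1 captures the paper's headline observation: a model can possess every capability a task requires yet still fall short of fully applying them toward the goal.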
Citation:

@article{everitt2025_2504.11844,
  title={Evaluating the Goal-Directedness of Large Language Models},
  author={Tom Everitt and Cristina Garbacea and Alexis Bellot and Jonathan Richens and Henry Papadatos and Siméon Campos and Rohin Shah},
  journal={arXiv preprint arXiv:2504.11844},
  year={2025}
}