
Measuring AI agent autonomy: Towards a scalable approach with code inspection

Abstract

AI agents are AI systems that can achieve complex goals autonomously. Assessing an agent's level of autonomy is crucial for understanding both its potential benefits and risks. Current assessments of autonomy often focus on specific risks and rely on run-time evaluations, i.e., observations of agent actions during operation. We introduce a code-based assessment of autonomy that eliminates the need to run an AI agent on specific tasks, thereby reducing the costs and risks associated with run-time evaluations. Using this code-based framework, the orchestration code used to run an AI agent can be scored according to a taxonomy that assesses two attributes of autonomy: impact and oversight. We demonstrate this approach with the AutoGen framework and selected applications.

@article{cihon2025_2502.15212,
  title={Measuring AI agent autonomy: Towards a scalable approach with code inspection},
  author={Peter Cihon and Merlin Stein and Gagan Bansal and Sam Manning and Kevin Xu},
  journal={arXiv preprint arXiv:2502.15212},
  year={2025}
}