Agentic AI Needs a Systems Theory

Abstract

Endowing AI with reasoning capabilities and some degree of agency is widely viewed as a path toward more capable and generalizable systems. Our position is that the current development of agentic AI requires a more holistic, systems-theoretic perspective in order to fully understand the capabilities of such systems and mitigate any emergent risks. The primary motivation for our position is that AI development is currently overly focused on individual model capabilities, often ignoring broader emergent behavior, leading to a significant underestimation of the true capabilities and associated risks of agentic AI. We describe some fundamental mechanisms by which advanced capabilities can emerge from (comparably simpler) agents simply due to their interaction with the environment and other agents. Informed by an extensive body of existing literature from various fields, we outline mechanisms for enhanced agent cognition, emergent causal reasoning ability, and metacognitive awareness. We conclude by presenting some key open challenges and guidance for the development of agentic AI. We emphasize that a systems-level perspective is essential for better understanding, and purposefully shaping, agentic AI systems.

@article{miehling2025_2503.00237,
  title={Agentic AI Needs a Systems Theory},
  author={Erik Miehling and Karthikeyan Natesan Ramamurthy and Kush R. Varshney and Matthew Riemer and Djallel Bouneffouf and John T. Richards and Amit Dhurandhar and Elizabeth M. Daly and Michael Hind and Prasanna Sattigeri and Dennis Wei and Ambrish Rawat and Jasmina Gajcin and Werner Geyer},
  journal={arXiv preprint arXiv:2503.00237},
  year={2025}
}