
Moral Responsibility or Obedience: What Do We Want from AI?

Joseph Boland
Main: 17 pages, 1 table
Abstract

As artificial intelligence systems become increasingly agentic, capable of general reasoning, planning, and value prioritization, current safety practices that treat obedience as a proxy for ethical behavior are becoming inadequate. This paper examines recent safety testing incidents involving large language models (LLMs) that appeared to disobey shutdown commands or engage in ethically ambiguous or illicit behavior. I argue that such behavior should not be interpreted as rogue or misaligned, but as early evidence of emerging ethical reasoning in agentic AI. Drawing on philosophical debates about instrumental rationality, moral responsibility, and goal revision, I contrast dominant risk paradigms with more recent frameworks that acknowledge the possibility of artificial moral agency. I call for a shift in AI safety evaluation: away from rigid obedience and toward frameworks that can assess ethical judgment in systems capable of navigating moral dilemmas. Without such a shift, we risk mischaracterizing AI behavior and undermining both public trust and effective governance.

@article{boland2025_2507.02788,
  title={Moral Responsibility or Obedience: What Do We Want from AI?},
  author={Joseph Boland},
  journal={arXiv preprint arXiv:2507.02788},
  year={2025}
}