
Unravelling Responsibility for AI

Abstract

It is widely acknowledged that we need to establish where responsibility lies for the outputs and impacts of AI-enabled systems. This is important to achieve justice and compensation for victims of AI harms, and to inform policy and engineering practice. But without a clear, thorough understanding of what 'responsibility' means, deliberations about where responsibility lies will be, at best, unfocused and incomplete and, at worst, misguided. Furthermore, AI-enabled systems exist within a wider ecosystem of actors, decisions, and governance structures, giving rise to complex networks of responsibility relations. To address these issues, this paper presents a conceptual framework of responsibility, accompanied by a graphical notation and general methodology, for visualising these responsibility networks and for tracing different responsibility attributions for AI. Taking the three-part formulation 'Actor A is responsible for Occurrence O,' the framework unravels the concept of responsibility to clarify that there are different possibilities of who is responsible for AI, senses in which they are responsible, and aspects of events they are responsible for. The notation allows these permutations to be represented graphically. The methodology enables users to apply the framework to specific scenarios. The aim is to offer a foundation to support stakeholders from diverse disciplinary backgrounds to discuss and address complex responsibility questions in hypothesised and real-world cases involving AI. The work is illustrated by application to a fictitious scenario of a fatal collision between a crewless, AI-enabled maritime vessel in autonomous mode and a traditional, crewed vessel at sea.
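To make the three-part formulation concrete, the sketch below records responsibility relations as a simple data structure. This is purely illustrative and not the paper's graphical notation or methodology; the actor names, sense labels, and occurrence descriptions are hypothetical examples loosely inspired by the maritime-collision scenario mentioned in the abstract.

```python
# Illustrative sketch only: records instances of the three-part formulation
# "Actor A is responsible (in some sense S) for Occurrence O" as entries in
# a responsibility network. The sense labels below are example categories,
# not the paper's taxonomy.
from dataclasses import dataclass

@dataclass(frozen=True)
class ResponsibilityRelation:
    actor: str        # who is responsible, e.g. an operator, developer, or organisation
    sense: str        # the sense in which they are responsible, e.g. "role", "causal", "legal"
    occurrence: str   # the aspect of the event they are responsible for

# Hypothetical relations for a collision involving an autonomous vessel
network = [
    ResponsibilityRelation("remote operator", "role", "monitoring the vessel in autonomous mode"),
    ResponsibilityRelation("software developer", "causal", "failure to detect the crewed vessel"),
    ResponsibilityRelation("operating company", "legal", "compensation for victims of the collision"),
]

for r in network:
    print(f"{r.actor} is {r.sense}-responsible for {r.occurrence}")
```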

@article{porter2025_2308.02608,
  title={Unravelling Responsibility for AI},
  author={Zoe Porter and Philippa Ryan and Phillip Morgan and Joanna Al-Qaddoumi and Bernard Twomey and Paul Noordhof and John McDermid and Ibrahim Habli},
  journal={arXiv preprint arXiv:2308.02608},
  year={2025}
}