A Principles-based Ethical Assurance Argument for AI and Autonomous Systems

AI and Ethics (AE), 2022
29 March 2022
Zoe Porter
Ibrahim Habli
John McDermid
arXiv:2203.15370
Abstract

An assurance case presents a clear and defensible argument, supported by evidence, that a system will operate as intended in a particular context. Typically, an assurance case argues that a system will be acceptably safe in its intended context. One emerging proposal within the Trustworthy AI research community is to extend and apply this methodology to provide assurance that the use of an AI system or autonomous system (AI/AS) will be acceptably ethical in a particular context. In this paper, we advance this proposal by presenting a principles-based ethical assurance (PBEA) argument pattern for AI/AS. The PBEA argument pattern offers a framework for reasoning about the overall ethical acceptability of the use of a given AI/AS, and it could serve as an early prototype template for specific ethical assurance cases. The four core ethical principles that form the basis of the PBEA argument pattern are justice, beneficence, non-maleficence, and respect for personal autonomy. Throughout, we connect stages of the argument pattern to examples of AI/AS applications, which helps to show the pattern's initial plausibility.
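The abstract describes an argument pattern in which a top-level claim of ethical acceptability is decomposed into sub-claims grounded in four core principles, each backed by evidence. The following is a minimal illustrative sketch of that structure, not code from the paper: the `Claim` class, its fields, and the example evidence items are all hypothetical names introduced here, loosely in the spirit of claim/sub-claim trees used in assurance-case notations.

```python
# Hypothetical sketch of an assurance-argument tree: a claim is "supported"
# if it has direct evidence, or if all of its sub-claims are supported.
from dataclasses import dataclass, field

@dataclass
class Claim:
    statement: str
    evidence: list = field(default_factory=list)    # supporting evidence items
    sub_claims: list = field(default_factory=list)  # finer-grained claims

    def supported(self) -> bool:
        """True if this claim has direct evidence, or every sub-claim is supported."""
        if self.evidence:
            return True
        return bool(self.sub_claims) and all(c.supported() for c in self.sub_claims)

# Top-level ethical-acceptability claim, decomposed over the four core
# principles named in the abstract; the evidence strings are placeholders.
top = Claim(
    "Use of the AI/AS in this context is acceptably ethical",
    sub_claims=[
        Claim("The use is acceptably just", evidence=["fairness assessment"]),
        Claim("The use is acceptably beneficent", evidence=["benefit analysis"]),
        Claim("The use is acceptably non-maleficent", evidence=["hazard analysis"]),
        Claim("The use respects personal autonomy", evidence=["consent review"]),
    ],
)
```

Under this sketch, the top-level claim holds only while every principle-level sub-claim is evidenced; removing the evidence from any one sub-claim defeats the overall argument, which mirrors how an assurance case makes its dependence on evidence explicit.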
