
Building Trustworthy AI by Addressing its 16+2 Desiderata with Goal-Directed Commonsense Reasoning

Main: 9 pages
Bibliography: 3 pages
1 table
Abstract

Recent advances in AI and its widening range of applications have highlighted the need to ensure its trustworthiness for legal, ethical, and even commercial reasons. Sub-symbolic machine learning algorithms, such as LLMs, simulate reasoning but hallucinate, and their decisions cannot be explained or audited, both crucial aspects of trustworthiness. Rule-based reasoners, such as Cyc, can provide the chain of reasoning steps, but they are complex and rely on a large number of specialized reasoners. We propose a middle ground using s(CASP), a goal-directed constraint answer set programming reasoner that employs a small number of mechanisms to emulate reliable and explainable human-style commonsense reasoning. In this paper, we explain how s(CASP) supports the 16 desiderata for trustworthy AI introduced by Doug Lenat and Gary Marcus (2023), as well as two additional ones: inconsistency detection and the assumption of alternative worlds. To illustrate the feasibility and synergies of s(CASP), we present a range of diverse applications, including a conversational chatbot and a virtually embodied reasoner.
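
To give a flavor of the mechanisms involved, below is a minimal sketch in s(CASP) syntax. It uses the classic bird/penguin default-reasoning example, not code from the paper; the file name and invocation are illustrative assumptions.

% birds.pl -- default reasoning with exceptions in s(CASP)
% Default rule: birds fly unless they can be shown to be abnormal.
flies(X) :- bird(X), not abnormal(X).
% Exception: penguins are abnormal with respect to flying.
abnormal(X) :- penguin(X).

bird(tweety).
bird(sam).
penguin(sam).

% "Alternative worlds": this even loop over negation yields two
% stable models, one containing p and one containing q.
p :- not q.
q :- not p.

% Goal-directed query: only the rules relevant to the goal are explored.
?- flies(tweety).

Run with something like scasp --tree birds.pl (the justification-tree flag as found in the Ciao and SWI-Prolog s(CASP) implementations): the query flies(tweety) succeeds and an auditable justification tree is printed, while flies(sam) fails because the penguin exception blocks the default, illustrating the explainable, human-style commonsense reasoning described above.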

@article{tudor2025_2506.12667,
  title={Building Trustworthy AI by Addressing its 16+2 Desiderata with Goal-Directed Commonsense Reasoning},
  author={Alexis R. Tudor and Yankai Zeng and Huaduo Wang and Joaquin Arias and Gopal Gupta},
  journal={arXiv preprint arXiv:2506.12667},
  year={2025}
}