A Scoresheet for Explainable AI
Explainability is important for the transparency of autonomous and intelligent systems and for supporting the development of appropriate levels of trust. There has been considerable work on developing approaches for explaining systems, and there are standards that specify requirements for transparency. However, there is a gap: the standards are too high-level and do not adequately specify requirements for explainability. This paper develops a scoresheet that can be used to specify explainability requirements or to assess the explainability provided by particular applications. The scoresheet is developed by considering the requirements of a range of stakeholders and is applicable to Multiagent Systems as well as other AI technologies. We also provide guidance on how to use the scoresheet, and illustrate its generality and usefulness by applying it to a range of applications.
@article{winikoff2025_2502.09861,
  title   = {A Scoresheet for Explainable AI},
  author  = {Michael Winikoff and John Thangarajah and Sebastian Rodriguez},
  journal = {arXiv preprint arXiv:2502.09861},
  year    = {2025}
}