Explanation User Interfaces: A Systematic Literature Review

Artificial Intelligence (AI) is one of the major technological advancements of this century, bearing incredible potential for users through AI-powered applications and tools in numerous domains. Since AI models are often black boxes (i.e., their decision-making process is unintelligible), developers typically resort to eXplainable Artificial Intelligence (XAI) techniques to interpret their behaviour and produce systems that are transparent, fair, reliable, and trustworthy. However, presenting explanations to the user is not trivial and is often treated as a secondary aspect of the system's design process, leading to AI systems that are not useful to end-users. This paper presents a Systematic Literature Review on Explanation User Interfaces (XUIs) to gain a deeper understanding of the solutions and design guidelines employed in the academic literature to effectively present explanations to users. To improve the contribution and real-world impact of this survey, we also present a framework for Human-cEnteRed developMent of Explainable user interfaceS (HERMES) to guide practitioners and academics in the design and evaluation of XUIs.
@article{cappuccio2025_2505.20085,
  title   = {Explanation User Interfaces: A Systematic Literature Review},
  author  = {Eleonora Cappuccio and Andrea Esposito and Francesco Greco and Giuseppe Desolda and Rosa Lanzilotti and Salvatore Rinzivillo},
  journal = {arXiv preprint arXiv:2505.20085},
  year    = {2025}
}