
A Multi-Layered Research Framework for Human-Centered AI: Defining the Path to Explainability and Trust

Abstract

The integration of Artificial Intelligence (AI) into high-stakes domains such as healthcare, finance, and autonomous systems is often constrained by concerns over transparency, interpretability, and trust. Human-Centered AI (HCAI) emphasizes alignment with human values, while Explainable AI (XAI) enhances transparency by making AI decisions more understandable; however, the lack of a unified approach that bridges the two limits AI's effectiveness in critical decision-making scenarios. This paper presents a novel three-layered framework that bridges HCAI and XAI to establish a structured explainability paradigm. The framework comprises (1) a foundational AI model with built-in explainability mechanisms, (2) a human-centered explanation layer that tailors explanations based on cognitive load and user expertise, and (3) a dynamic feedback loop that refines explanations through real-time user interaction. The framework is evaluated across healthcare, finance, and software development, demonstrating its potential to enhance decision-making, regulatory compliance, and public trust. Our findings advance Human-Centered Explainable AI (HCXAI), fostering AI systems that are transparent, adaptable, and ethically aligned.
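To make the three-layered architecture concrete, the sketch below shows one possible way the layers could compose in code. It is not taken from the paper: every class, method, and rule is hypothetical, and a toy rule-based predictor stands in for the foundational model so the example is self-contained.

```python
"""Illustrative sketch (not from the paper) of the three-layered HCXAI
framework described in the abstract: (1) an explainable base model,
(2) a human-centered explanation layer, (3) a feedback loop.
All names and thresholds are hypothetical."""

from dataclasses import dataclass


@dataclass
class Explanation:
    """A prediction together with its supporting rationale."""
    prediction: str
    rationale: list[str]
    detail_level: str = "full"


class ExplainableModel:
    """Layer 1 (hypothetical): a model with built-in explainability,
    faked here as a transparent rule list."""

    RULES = [("blood_pressure", 140, "hypertension risk"),
             ("glucose", 126, "diabetes risk")]

    def predict(self, record: dict) -> Explanation:
        fired = [f"{k} = {record[k]} exceeds {t} ({why})"
                 for k, t, why in self.RULES if record.get(k, 0) > t]
        label = "refer to specialist" if fired else "routine follow-up"
        return Explanation(prediction=label, rationale=fired)


class HumanCenteredExplainer:
    """Layer 2 (hypothetical): tailors the explanation to the user's
    expertise and an estimate of acceptable cognitive load."""

    def tailor(self, exp: Explanation, expertise: str) -> Explanation:
        if expertise == "expert":
            return exp  # experts receive the full rationale
        # Lay users get a shortened, lower-load summary.
        summary = exp.rationale[:1] or ["no risk factors detected"]
        return Explanation(exp.prediction, summary, detail_level="summary")


class FeedbackLoop:
    """Layer 3 (hypothetical): refines explanation depth from user feedback."""

    def __init__(self):
        self.preferred_detail = {}

    def record(self, user: str, wanted_more_detail: bool):
        self.preferred_detail[user] = "full" if wanted_more_detail else "summary"

    def adjust(self, user: str, shown: Explanation, full: Explanation) -> Explanation:
        return full if self.preferred_detail.get(user) == "full" else shown


if __name__ == "__main__":
    model, explainer, loop = ExplainableModel(), HumanCenteredExplainer(), FeedbackLoop()
    patient = {"blood_pressure": 150, "glucose": 110}

    full = model.predict(patient)                           # Layer 1
    shown = explainer.tailor(full, expertise="layperson")   # Layer 2
    print(shown)

    loop.record("user42", wanted_more_detail=True)          # Layer 3: user requests depth
    print(loop.adjust("user42", shown, full))
```

In this reading, the feedback loop closes the cycle by adjusting what Layer 2 shows on the next interaction; the paper's actual mechanism may differ.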

@article{silva2025_2504.13926,
  title={A Multi-Layered Research Framework for Human-Centered AI: Defining the Path to Explainability and Trust},
  author={Chameera De Silva and Thilina Halloluwa and Dhaval Vyas},
  journal={arXiv preprint arXiv:2504.13926},
  year={2025}
}