Evaluating Explanation Quality in X-IDS Using Feature Alignment Metrics

Abstract

Explainable artificial intelligence (XAI) methods have become increasingly important for improving the interpretability and trustworthiness of explainable intrusion detection systems (X-IDSs). However, existing XAI evaluation approaches focus on model-specific properties such as fidelity and simplicity and neglect whether the explanation content is meaningful or useful within the application domain. In this paper, we introduce new metrics for evaluating the quality of explanations produced by X-IDSs. The metrics quantify how well explanations align with predefined feature sets that can be derived from domain-specific knowledge bases. Such alignment enables explanations to reflect domain knowledge and to provide meaningful, actionable insights for security analysts. In our evaluation, we demonstrate the use of the proposed metrics to assess the quality of explanations from X-IDSs. The experimental results show that the proposed metrics reveal meaningful differences in explanation quality across X-IDSs and attack types, and indicate how well X-IDS explanations reflect known domain knowledge. These findings give security analysts actionable guidance for improving the interpretability of X-IDSs in practical settings.
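The core idea of a feature-alignment metric can be sketched as a set-overlap score between an explanation's most influential features and a domain-derived feature set. The sketch below is illustrative only: the function, feature names, and top-k threshold are assumptions for demonstration, not the paper's actual metric definitions.

```python
def alignment_score(explanation_weights, domain_features, k=5):
    """Jaccard-style overlap between the top-k attributed features
    and a predefined set of domain-relevant features.

    Hypothetical sketch: the paper's metrics may be defined differently.
    """
    # Rank features by absolute attribution magnitude and keep the top k.
    top_k = sorted(explanation_weights,
                   key=lambda f: abs(explanation_weights[f]),
                   reverse=True)[:k]
    # Score = |intersection| / |union| of top-k features and domain set.
    return len(set(top_k) & set(domain_features)) / \
           len(set(top_k) | set(domain_features))

# Example: SHAP-like attributions for a flow flagged as a DoS attack
# (feature names and weights are invented for illustration).
weights = {"pkt_rate": 0.42, "syn_flag_ratio": 0.31, "dst_port": 0.12,
           "ttl": 0.05, "payload_entropy": 0.03, "src_port": 0.01}
dos_features = {"pkt_rate", "syn_flag_ratio", "flow_duration", "dst_port"}

score = alignment_score(weights, dos_features, k=3)
print(score)  # 0.75: 3 of the top-3 features lie in the 4-feature union
```

A higher score indicates that the explanation emphasizes features a security analyst would expect for that attack type, which is the kind of domain-grounded comparison the proposed metrics aim to quantify.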

@article{alquliti2025_2505.08006,
  title={Evaluating Explanation Quality in X-IDS Using Feature Alignment Metrics},
  author={Mohammed Alquliti and Erisa Karafili and BooJoong Kang},
  journal={arXiv preprint arXiv:2505.08006},
  year={2025}
}