The Grammar of Interactive Explanatory Model Analysis

Data Mining and Knowledge Discovery (DMKD), 2020
Abstract

When analysing a complex system, the answer to one question very often raises new questions. The same applies to the explanatory analysis of machine learning models. A complex model cannot be sufficiently explained with a single method that offers only one perspective. Isolated explanations are prone to misunderstanding, which inevitably leads to wrong reasoning. Surprisingly, the majority of methods developed for Explainable Artificial Intelligence (XAI) focus on a single aspect of model behaviour. In this paper, we frame model explainability as an interactive and sequential analysis of a model. We show how different XAI methods complement each other and why it is essential to juxtapose them. The proposed process of Interactive Explanatory Model Analysis (IEMA) derives from the algorithmic side of model explanation and aims to embrace ideas developed in the cognitive sciences. Its grammar is implemented in the modelStudio framework, which adopts interactivity, customisability and automation as its main traits.
