ExplainReduce: Summarising local explanations via proxies

Abstract

Most commonly used non-linear machine learning methods are closed-box models, uninterpretable to humans. The field of explainable artificial intelligence (XAI) aims to develop tools to examine the inner workings of these closed boxes. An often-used model-agnostic approach to XAI involves using simple models as local approximations to produce so-called local explanations; examples of this approach include LIME, SHAP, and SLISEMAP. This paper shows how a large set of local explanations can be reduced to a small "proxy set" of simple models, which can act as a generative global explanation. This reduction procedure, ExplainReduce, can be formulated as an optimisation problem and approximated efficiently using greedy heuristics.
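The paper's details are in the full text, but the core idea — greedily selecting a small subset of local models that jointly approximate the data well — can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: `losses[i][j]` is assumed to hold the loss of local model `j` on data item `i`, and the greedy heuristic repeatedly adds the model that most reduces the total per-item minimum loss.

```python
# Hypothetical sketch of a greedy proxy-set reduction (illustrative only,
# not the ExplainReduce reference implementation).

def greedy_proxy_set(losses, k):
    """Select k column indices (local models) minimising the sum over
    rows (data items) of the minimum loss among selected columns."""
    n_items = len(losses)
    n_models = len(losses[0])
    selected = []
    # best[i] = lowest loss achieved on item i by any proxy chosen so far
    best = [float("inf")] * n_items

    for _ in range(k):
        def total_if_added(j):
            # Total loss if model j were added to the current proxy set.
            return sum(min(best[i], losses[i][j]) for i in range(n_items))

        # Greedy step: pick the unselected model with the largest gain.
        j_star = min(
            (j for j in range(n_models) if j not in selected),
            key=total_if_added,
        )
        selected.append(j_star)
        best = [min(best[i], losses[i][j_star]) for i in range(n_items)]

    return selected, sum(best)
```

For example, with three items each perfectly explained by a different local model, selecting two proxies covers two of the three items exactly, leaving a residual loss of one. Greedy selection like this is a standard approximation for such subset-selection objectives; the exact formulation and guarantees are given in the paper itself.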

@article{seppäläinen2025_2502.10311,
  title={ExplainReduce: Summarising local explanations via proxies},
  author={Lauri Seppäläinen and Mudong Guo and Kai Puolamäki},
  journal={arXiv preprint arXiv:2502.10311},
  year={2025}
}