CFIRE: A General Method for Combining Local Explanations

We propose a novel eXplainable AI algorithm that computes faithful, easy-to-understand, and complete global decision rules for tabular data from local explanations by combining XAI methods with closed frequent itemset mining. Our method can be used with any local explainer that indicates which dimensions are important for a given sample and a given black-box decision. This property allows our algorithm to choose among different local explainers, addressing the disagreement problem, i.e., the observation that no single explanation method consistently outperforms the others across models and datasets. Unlike common experimental practice, our evaluation also accounts for the Rashomon effect in model explainability. To this end, we demonstrate the robustness of our approach by finding suitable rules for nearly all of the 700 black-box models we considered across 14 benchmark datasets. The results also show that our method achieves improved runtime and high precision and F1-scores while generating compact and complete rules.
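The abstract does not specify CFIRE's exact pipeline, but its core ingredient, closed frequent itemset mining over the feature sets flagged by a local explainer, can be sketched as follows. Each sample's local explanation is reduced to the set of feature indices marked important; an itemset is frequent if it appears in at least `min_support` of these sets, and closed if no proper superset has the same support. The function name and brute-force enumeration here are illustrative, not the authors' implementation:

```python
from itertools import combinations

def closed_frequent_itemsets(transactions, min_support):
    """Mine closed frequent itemsets from per-sample important-feature sets.

    transactions: list of frozensets of feature indices, one per sample,
    e.g. the dimensions a local explainer marked as important.
    """
    items = sorted({i for t in transactions for i in t})
    support = {}
    # Enumerate all candidate itemsets and keep the frequent ones.
    # (Illustrative brute force; real miners use Apriori/FP-growth-style pruning.)
    for k in range(1, len(items) + 1):
        for cand in combinations(items, k):
            s = frozenset(cand)
            count = sum(1 for t in transactions if s <= t)
            if count >= min_support:
                support[s] = count
    # Closed: no proper frequent superset has identical support.
    return {s: c for s, c in support.items()
            if not any(s < t and support[t] == c for t in support)}
```

For example, with explanations `{0,1}`, `{0,1,2}`, `{0,2}` and `min_support=2`, the closed frequent itemsets are `{0}` (support 3), `{0,1}` (support 2), and `{0,2}` (support 2); the singletons `{1}` and `{2}` are absorbed by their equally supported supersets. Such itemsets would then serve as candidate antecedents for global decision rules.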
@article{müller2025_2504.00930,
  title={CFIRE: A General Method for Combining Local Explanations},
  author={Sebastian Müller and Vanessa Toborek and Tamás Horváth and Christian Bauckhage},
  journal={arXiv preprint arXiv:2504.00930},
  year={2025}
}