The joint implementation of federated learning (FL) and explainable artificial intelligence (XAI) could allow models to be trained on distributed data and their inner workings to be explained while preserving essential aspects of privacy. Toward establishing the benefits and tensions of this interplay, this scoping review maps the publications that jointly address FL and XAI, focusing on those reporting an interplay between FL and model interpretability or post-hoc explanations. Of the 37 studies meeting our criteria, only one explicitly and quantitatively analyzed the influence of FL on model explanations, revealing a significant research gap. Aggregating interpretability metrics across FL nodes produced generalized global insights at the expense of diluting node-specific patterns. Several studies proposed FL algorithms that incorporate explanation methods to safeguard the learning process against defaulting or malicious nodes. Only a minority of studies used established FL libraries or followed reporting guidelines. More quantitative research and more structured, transparent practices are needed to fully understand the mutual impact of FL and XAI and the conditions under which it arises.
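To make the dilution effect concrete, the following minimal sketch (illustrative only, not drawn from any reviewed study) shows how sample-size-weighted averaging of per-node feature-importance vectors, mirroring FedAvg-style aggregation weights, can mask a feature that dominates at a single node; the node importances and sample counts below are invented assumptions.

```python
# Hypothetical illustration: averaging per-node feature-importance
# vectors with FedAvg-style weights dilutes node-specific patterns.
import numpy as np

# Assumed setup: 5 FL nodes, 4 features; values are illustrative.
node_importances = np.array([
    [0.70, 0.10, 0.10, 0.10],  # node 0: feature 0 dominates
    [0.10, 0.60, 0.15, 0.15],  # node 1: feature 1 dominates
    [0.25, 0.25, 0.25, 0.25],  # node 2: uniform
    [0.20, 0.20, 0.40, 0.20],  # node 3: feature 2 mildly dominant
    [0.05, 0.05, 0.05, 0.85],  # node 4: feature 3 dominates strongly
])

# Sample-size-weighted average, mirroring FedAvg aggregation weights.
samples = np.array([100, 100, 400, 200, 50])
weights = samples / samples.sum()
global_importance = weights @ node_importances

print("global importances:", np.round(global_importance, 3))
print("top feature per node:", node_importances.argmax(axis=1))
print("top feature globally:", global_importance.argmax())
# Feature 3 dominates at node 4 (0.85), but node 4 carries little
# aggregation weight, so its pattern is diluted in the global view.
```

Running the sketch, the global ranking picks feature 0 while node 4's dominant feature 3 nearly vanishes from the aggregate, which is the kind of node-level insight the reviewed studies risk losing when interpretability metrics are averaged across nodes.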
@article{lopez-ramos2025_2411.05874,
  title   = {Interplay between Federated Learning and Explainable Artificial Intelligence: A Scoping Review},
  author  = {Luis M. Lopez-Ramos and Florian Leiser and Aditya Rastogi and Steven Hicks and Inga Strümke and Vince I. Madai and Tobias Budig and Ali Sunyaev and Adam Hilbert},
  journal = {arXiv preprint arXiv:2411.05874},
  year    = {2025}
}