
Defending the Edge: Representative-Attention for Mitigating Backdoor Attacks in Federated Learning

Abstract

Federated learning (FL) enhances privacy and reduces communication cost for resource-constrained edge clients by supporting distributed model training at the edge. However, the heterogeneous nature of such devices produces diverse, non-independent and identically distributed (non-IID) data, making the detection of backdoor attacks more challenging. In this paper, we propose a novel federated representative-attention-based defense mechanism, named FeRA, that leverages cross-client attention over internal feature representations to distinguish benign from malicious clients. FeRA computes an anomaly score based on representation reconstruction errors, effectively identifying clients whose internal activations deviate significantly from the group consensus. Our evaluation demonstrates FeRA's robustness across various FL scenarios, including the challenging non-IID data distributions typical of edge devices. Experimental results show that it substantially reduces backdoor attack success rates while maintaining high accuracy on the main task. The method is model-agnostic, attack-agnostic, and does not require labeled reference data, making it well suited to heterogeneous and resource-limited edge deployments.
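The abstract's core idea — reconstructing each client's internal representation from the other clients via attention, then scoring deviation from that consensus — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the function name, the cosine-similarity attention, the temperature parameter, and the use of per-client mean-activation vectors are all illustrative assumptions.

```python
import numpy as np

def fera_anomaly_scores(reps: np.ndarray, temperature: float = 1.0) -> np.ndarray:
    """Hypothetical sketch of a representative-attention anomaly score.

    reps: (n_clients, d) matrix of per-client feature representations
    (e.g. mean internal activations on a shared probe batch -- an
    assumption, not a detail from the paper). Each client's vector is
    reconstructed from the *other* clients via softmax attention; the
    reconstruction error serves as that client's anomaly score.
    """
    # Cosine-similarity attention logits between clients
    norms = np.linalg.norm(reps, axis=1, keepdims=True) + 1e-12
    unit = reps / norms
    logits = unit @ unit.T / temperature
    np.fill_diagonal(logits, -np.inf)  # a client may not attend to itself

    # Row-wise softmax over the remaining clients
    weights = np.exp(logits - logits.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)

    # Attention-weighted consensus reconstruction and per-client error
    recon = weights @ reps
    return np.linalg.norm(reps - recon, axis=1)
```

Under this sketch, benign clients whose representations cluster together reconstruct each other well and receive low scores, while a backdoored client whose activations sit far from the consensus is poorly reconstructed and stands out; the server could then exclude or down-weight high-scoring updates before aggregation.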

@article{obioma2025_2505.10297,
  title={Defending the Edge: Representative-Attention for Mitigating Backdoor Attacks in Federated Learning},
  author={Chibueze Peace Obioma and Youcheng Sun and Mustafa A. Mustafa},
  journal={arXiv preprint arXiv:2505.10297},
  year={2025}
}