
On the Out-of-Distribution Backdoor Attack for Federated Learning

Main: 8 pages
4 figures
Bibliography: 2 pages
10 tables
Abstract

Traditional backdoor attacks in federated learning (FL) operate within constrained attack scenarios, as they depend on visible triggers and require physical modifications to the target object, which limits their practicality. To address this limitation, we introduce a novel backdoor attack prototype for FL called the out-of-distribution (OOD) backdoor attack (OBA), which uses OOD data simultaneously as both the poisoned samples and the triggers. Our approach significantly broadens the scope of backdoor attack scenarios in FL. To improve the stealthiness of OBA, we propose SoDa, which regularizes both the magnitude and direction of malicious local models during local training, aligning them closely with their benign versions to evade detection. Empirical results demonstrate that OBA effectively circumvents state-of-the-art defenses while maintaining high accuracy on the main task.

To address this security vulnerability in the FL system, we introduce BNGuard, a new server-side defense method tailored against SoDa. BNGuard leverages the observation that OOD data causes significant deviations in the running statistics of batch normalization layers. This allows BNGuard to identify malicious model updates and exclude them from aggregation, thereby enhancing the backdoor robustness of FL. Extensive experiments across various settings show the effectiveness of BNGuard in defending against SoDa. The code is available at this https URL.
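The attack replaces pixel-pattern triggers with whole OOD inputs: during local training the adversary labels OOD samples with an attacker-chosen target class, so presenting any OOD input at inference time acts as the trigger. A minimal sketch of this poisoning step, where the function name and the way the target class is supplied are our illustrative assumptions rather than the paper's exact procedure:

```python
import torch

def make_ood_poison_batch(ood_images: torch.Tensor, target_class: int):
    """Label a batch of out-of-distribution images with the attacker's
    target class. The OOD inputs themselves serve as the backdoor trigger,
    so no pixel-level modification of in-distribution data is needed.
    (Illustrative sketch; the paper's poisoning recipe may differ.)
    """
    labels = torch.full((ood_images.size(0),), target_class, dtype=torch.long)
    return ood_images, labels
```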
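SoDa's stealth constraint can be read as a two-part penalty on the malicious client's model update: keep its L2 norm close to that of a benign reference update, and keep its direction aligned with that reference. A hedged PyTorch-style sketch, in which the flattened-update representation, the loss weights, and all names are assumptions on our part, not the paper's notation:

```python
import torch
import torch.nn.functional as F

def soda_regularizer(mal_update: torch.Tensor,
                     benign_update: torch.Tensor,
                     lam_mag: float = 1.0,
                     lam_dir: float = 1.0) -> torch.Tensor:
    """Penalize deviation of a (flattened) malicious update from a benign
    reference in both magnitude and direction. Added to the attacker's
    training loss so the poisoned model stays close to its benign version.
    """
    # Magnitude term: squared gap between the two update norms.
    mag_penalty = (mal_update.norm() - benign_update.norm()) ** 2
    # Direction term: 1 - cosine similarity, zero when perfectly aligned.
    cos = F.cosine_similarity(mal_update.unsqueeze(0),
                              benign_update.unsqueeze(0)).squeeze()
    dir_penalty = 1.0 - cos
    return lam_mag * mag_penalty + lam_dir * dir_penalty
```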
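On the defense side, the batch-normalization observation suggests a simple server-side filtering rule: collect the BN running means and variances from every submitted model, score each client by its deviation from a robust center (here, the coordinate-wise median), and exclude outliers from aggregation. The scoring rule and z-score threshold below are placeholder assumptions, not BNGuard's actual detector:

```python
import torch

def bn_statistics_vector(model: torch.nn.Module) -> torch.Tensor:
    """Concatenate running means and variances from all BatchNorm layers."""
    stats = []
    for m in model.modules():
        if isinstance(m, (torch.nn.BatchNorm1d,
                          torch.nn.BatchNorm2d,
                          torch.nn.BatchNorm3d)):
            stats.append(m.running_mean.flatten())
            stats.append(m.running_var.flatten())
    return torch.cat(stats)

def filter_clients(client_models, z_thresh: float = 2.0):
    """Return indices of clients whose BN statistics stay close to the
    coordinate-wise median across all submitted models; the rest are
    treated as suspicious and dropped before aggregation."""
    vecs = torch.stack([bn_statistics_vector(m) for m in client_models])
    median = vecs.median(dim=0).values
    scores = (vecs - median).norm(dim=1)   # one deviation score per client
    mu, sigma = scores.mean(), scores.std()
    return [i for i, s in enumerate(scores)
            if (s - mu) / (sigma + 1e-12) < z_thresh]
```

The intuition is that local training on OOD poison data shifts BN running statistics far more than benign heterogeneity does, so even a coarse outlier score over those statistics can separate the two groups.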
