EBS-CFL: Efficient and Byzantine-robust Secure Clustered Federated Learning

Despite federated learning (FL)'s potential for collaborative learning, its performance deteriorates under the data heterogeneity of distributed users. Recently, clustered federated learning (CFL) has emerged to address this challenge by partitioning users into clusters according to their similarity. However, CFL struggles to train when users are unwilling to share their cluster identities due to privacy concerns. To address these issues, we present an innovative Efficient and Byzantine-robust Secure aggregation scheme for CFL, dubbed EBS-CFL. The proposed EBS-CFL supports effective CFL training while keeping users' cluster identities confidential. Moreover, it detects potential poisoning attacks without compromising individual client gradients by discarding negatively correlated gradients and aggregating positively correlated ones with a weighted approach. The server also verifies that clients have encoded their gradients correctly. EBS-CFL is highly efficient, with client-side overhead of O(ml + m^2) for communication and O(m^2 l) for computation, where m is the number of cluster identities and l is the gradient size. When m = 1, the client-side computational efficiency of EBS-CFL is at least O(log n) times better than that of comparison schemes, where n is the number of users. In addition, we validate the scheme through extensive experiments. Finally, we theoretically prove the scheme's security.
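The abstract describes the robustness mechanism only at a high level. The snippet below is a minimal NumPy sketch of the general idea of correlation-based filtering with weighted aggregation, under assumptions of our own (plaintext gradients and a coordinate-wise median as the reference direction); it is not the actual EBS-CFL protocol, which performs this step under secure aggregation so the server never observes individual gradients or cluster identities.

```python
import numpy as np

def robust_weighted_aggregate(gradients):
    """Illustrative sketch (not the EBS-CFL protocol): drop gradients that are
    negatively correlated with a robust reference direction and average the
    remaining ones, weighted by their correlation."""
    grads = np.stack(gradients)                  # shape: (n_clients, l)
    reference = np.median(grads, axis=0)         # assumed robust reference direction

    # Cosine correlation of each client gradient with the reference.
    norms = np.linalg.norm(grads, axis=1) * np.linalg.norm(reference) + 1e-12
    corr = grads @ reference / norms

    # Discard negatively correlated gradients; weight the rest by correlation.
    weights = np.clip(corr, 0.0, None)
    if weights.sum() == 0:
        return reference                         # fallback if everything is filtered
    weights /= weights.sum()
    return weights @ grads                       # weighted aggregate, shape (l,)

# Example: two benign clients and one sign-flipping (Byzantine) client.
benign = [np.array([1.0, 0.9, 1.1]), np.array([0.95, 1.05, 1.0])]
byzantine = [-10.0 * benign[0]]
print(robust_weighted_aggregate(benign + byzantine))  # close to the benign mean
```

In this toy run, the sign-flipped gradient has negative correlation with the median direction and receives zero weight, so the aggregate stays close to the benign average.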
@article{li2025_2506.13612,
  title={EBS-CFL: Efficient and Byzantine-robust Secure Clustered Federated Learning},
  author={Zhiqiang Li and Haiyong Bao and Menghong Guan and Hao Pan and Cheng Huang and Hong-Ning Dai},
  journal={arXiv preprint arXiv:2506.13612},
  year={2025}
}