
Enhancing Security and Privacy in Federated Learning using Low-Dimensional Update Representation and Proximity-Based Defense

Abstract

Federated Learning (FL) is a promising privacy-preserving machine learning paradigm that allows data owners to collaboratively train models while keeping their data localized. Despite its potential, FL faces challenges related to the trustworthiness of both clients and servers, particularly against curious or malicious adversaries. In this paper, we introduce a novel framework named \underline{F}ederated \underline{L}earning with Low-Dimensional \underline{U}pdate \underline{R}epresentation and \underline{P}roximity-Based defense (FLURP), designed to address privacy preservation and resistance to Byzantine attacks in distributed learning environments. FLURP employs the $\mathsf{LinfSample}$ method, enabling clients to compute the $\ell_{\infty}$ norm across sliding windows of updates, resulting in a Low-Dimensional Update Representation (LUR). Calculating the shared distance matrix among LURs, rather than updates, significantly reduces the overhead of Secure Multi-Party Computation (SMPC) by three orders of magnitude while effectively distinguishing between benign and poisoned updates. Additionally, FLURP integrates a privacy-preserving proximity-based defense mechanism utilizing optimized SMPC protocols to minimize communication rounds. Our experiments demonstrate FLURP's effectiveness in countering Byzantine adversaries with low communication and runtime overhead. FLURP offers a scalable framework for secure and reliable FL in distributed environments, facilitating its application in scenarios requiring robust data management and security.
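The dimensionality reduction at the heart of the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes non-overlapping windows over a flattened update and a hypothetical `window_size` parameter, and uses the fact that the $\ell_{\infty}$ norm of a window is the maximum absolute value within it.

```python
import numpy as np

def linf_sample(update: np.ndarray, window_size: int) -> np.ndarray:
    """Sketch of a LinfSample-style reduction (illustrative only):
    compute the l-infinity norm over consecutive windows of a
    flattened model update, producing a Low-Dimensional Update
    Representation (LUR)."""
    flat = np.abs(np.asarray(update, dtype=float).ravel())
    # Zero-pad so the length divides evenly into windows.
    pad = (-len(flat)) % window_size
    flat = np.pad(flat, (0, pad))
    # Max absolute value per window == l-infinity norm of that window.
    return flat.reshape(-1, window_size).max(axis=1)

# Example: a 1M-parameter update with window 1000 yields a 1000-dim
# LUR, a ~1000x reduction before any SMPC distance computation.
update = np.random.randn(1_000_000)
lur = linf_sample(update, 1000)
print(lur.shape)  # (1000,)
```

Pairwise distances for the proximity-based defense would then be computed over these short LURs inside SMPC, rather than over the full-dimensional updates, which is where the claimed overhead reduction comes from.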

@article{li2025_2405.18802,
  title={Enhancing Security and Privacy in Federated Learning using Low-Dimensional Update Representation and Proximity-Based Defense},
  author={Wenjie Li and Kai Fan and Jingyuan Zhang and Hui Li and Wei Yang Bryan Lim and Qiang Yang},
  journal={arXiv preprint arXiv:2405.18802},
  year={2025}
}