Evaluation of Differential Privacy Mechanisms on Federated Learning

Federated learning trains a model across multiple clients without disclosing their raw data. Despite advances in data privacy, risks remain. Differential Privacy (DP) protects sensitive data by adding noise to model updates, usually governed by a fixed privacy budget. However, a fixed budget can inject excessive noise, particularly as the model converges, which degrades performance. Adaptive privacy budgets have been investigated as a potential solution to this problem. This work implements DP methods based on the Laplace and Gaussian mechanisms with an adaptive privacy budget, extending the SelecEval simulator. We further introduce an adaptive clipping approach for the Gaussian mechanism, so that the clipping bound applied to model gradients is updated dynamically rather than derived from a fixed sensitivity. We conduct extensive experiments with various privacy budgets, IID and non-IID datasets, and different numbers of clients selected per round. While our experiments were limited to 200 training rounds, the results suggest that adaptive privacy budgets and adaptive clipping can help maintain model accuracy while preserving privacy.
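To make the approach concrete, below is a minimal Python/NumPy sketch of the two mechanisms with an adaptively updated clipping bound and a per-round budget schedule. The quantile-based clipping rule, the linearly increasing budget schedule, and all function names are illustrative assumptions; the actual update rules implemented in SelecEval may differ.

    import numpy as np

    def adaptive_epsilon(total_epsilon, round_idx, total_rounds):
        # Toy per-round schedule (assumption): spend more of the budget in
        # later rounds, where updates shrink and fixed noise hurts most.
        weights = np.arange(1, total_rounds + 1, dtype=float)
        return total_epsilon * weights[round_idx] / weights.sum()

    def laplace_update(update, sensitivity, epsilon):
        # Laplace mechanism: scale b = sensitivity / epsilon gives epsilon-DP.
        return update + np.random.laplace(0.0, sensitivity / epsilon, update.shape)

    def gaussian_update(update, norm_history, epsilon, delta, quantile=0.9):
        # Adaptive clipping: the bound tracks a quantile of recently observed
        # update norms instead of a fixed sensitivity (assumed rule).
        norm = float(np.linalg.norm(update))
        norm_history.append(norm)
        clip = float(np.quantile(norm_history[-50:], quantile))
        clipped = update * min(1.0, clip / (norm + 1e-12))
        # Standard Gaussian-mechanism calibration with L2 sensitivity = clip.
        sigma = clip * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
        return clipped + np.random.normal(0.0, sigma, update.shape)

In this sketch, a server would apply gaussian_update (or laplace_update) to each selected client's update using the round's adaptive_epsilon before aggregating the noised updates.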