Gradient-Leaks: Understanding and Controlling Deanonymization in Federated Learning
Federated Learning (FL) systems are gaining popularity as a solution for training Machine Learning (ML) models on large-scale user data collected on personal devices (e.g., smartphones) without the raw data leaving the device. At the core of FL is a network of anonymous user devices sharing minimal training information (model parameter deltas) computed locally on personal data. However, the degree to which user-specific information is encoded in these model deltas is poorly understood. In this paper, we identify that model deltas encode subtle variations in how users capture and generate data. These variations provide a powerful statistical signal that allows an adversary to effectively deanonymize participating devices using a limited amount of auxiliary data. We analyze the resulting deanonymization attacks on diverse tasks over real-world (anonymized) user-generated data across a range of closed- and open-world scenarios. We also study various strategies to mitigate the risks of deanonymization. As random perturbation methods do not offer convincing operating points, we propose data-augmentation strategies that introduce adversarial biases into device data and thereby offer substantial protection against deanonymization threats with little effect on utility.
View on arXiv
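
As a rough illustration of the threat model sketched in the abstract, the toy Python example below (entirely hypothetical; the function names, linear model, and data-generation assumptions are ours, not the paper's implementation) shows how a user-specific bias in local data can surface in the parameter delta uploaded during an FL round, and how an adversary holding a small amount of per-user auxiliary data could match an anonymous delta to its likely originator via nearest-neighbour comparison.

```python
# Hypothetical toy sketch (not the paper's pipeline): a device's model delta
# acts as a statistical fingerprint of how that user generates data, and an
# adversary with per-user auxiliary samples matches an anonymous delta to the
# most similar user-specific "reference delta".
import numpy as np

rng = np.random.default_rng(0)
DIM = 32          # parameter dimensionality of a toy linear model
NUM_USERS = 5     # users covered by the adversary's auxiliary data


def local_delta(weights, x, y, lr=0.1, steps=5):
    """One round of local training on a device; returns the parameter delta
    that would be uploaded to the FL server (squared-loss linear model)."""
    w = weights.copy()
    for _ in range(steps):
        grad = 2 * x.T @ (x @ w - y) / len(x)
        w -= lr * grad
    return w - weights


# Each user's data carries a subtle, user-specific bias (their "style").
user_bias = [rng.normal(0, 1, DIM) for _ in range(NUM_USERS)]


def sample_user_data(u, n=64):
    x = rng.normal(0, 1, (n, DIM))
    y = x @ user_bias[u] + rng.normal(0, 0.1, n)  # labels reflect the bias
    return x, y


w_global = np.zeros(DIM)

# Adversary builds reference deltas from the auxiliary data it holds per user.
reference = np.stack([local_delta(w_global, *sample_user_data(u))
                      for u in range(NUM_USERS)])

# An anonymous device (actually user 3) uploads its delta in a later round.
anon_delta = local_delta(w_global, *sample_user_data(3))

# Deanonymization: cosine-similarity nearest neighbour over reference deltas.
cos = reference @ anon_delta / (
    np.linalg.norm(reference, axis=1) * np.linalg.norm(anon_delta) + 1e-12)
print("predicted user:", int(np.argmax(cos)))  # expected to print 3
```

In this toy setting the user-specific bias dominates the direction of the delta, which is why matching succeeds with only a handful of auxiliary samples; it also suggests, in line with the abstract, why perturbing the uploaded deltas with random noise must be heavy-handed to hide the signal, whereas augmenting the device data itself can directly counteract the bias the adversary exploits.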