Cutting Through Privacy: A Hyperplane-Based Data Reconstruction Attack in Federated Learning

Federated Learning (FL) enables collaborative training of machine learning models across distributed clients without sharing raw data, ostensibly preserving data privacy. Nevertheless, recent studies have revealed critical vulnerabilities in FL, showing that a malicious central server can manipulate model updates to reconstruct clients' private training data. Existing data reconstruction attacks have important limitations: they often rely on assumptions about the clients' data distribution, or their efficiency degrades significantly when batch sizes exceed just a few tens of samples. In this work, we introduce a novel data reconstruction attack that overcomes these limitations. Our method leverages a new geometric perspective on fully connected layers to craft malicious model parameters, enabling the perfect recovery of arbitrarily large data batches in classification tasks without any prior knowledge of clients' data. Through extensive experiments on both image and tabular datasets, we demonstrate that our attack outperforms existing methods and achieves perfect reconstruction of data batches two orders of magnitude larger than the state of the art.
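The abstract does not detail the paper's construction, but the general vulnerability it builds on is a well-known property of fully connected layers: for a single input x passing through z = Wx + b, the gradient of the loss with respect to row i of W factors as (dL/db_i) · x, so the input can be read off from the shared gradients. A minimal sketch of this classic observation (not the paper's hyperplane-based method, whose batch-separation machinery is more involved):

```python
import numpy as np

# For a fully connected layer z = W x + b and any loss L, the chain rule
# gives dL/dW_i = (dL/db_i) * x for each row i of W. A server that sees
# the client's gradients can therefore recover x as grad_W[i] / grad_b[i]
# for any neuron i with a nonzero bias gradient.

rng = np.random.default_rng(0)
d, k = 8, 4                      # input dimension, number of neurons
x = rng.normal(size=d)           # the client's private sample
W = rng.normal(size=(k, d))      # layer weights (chosen by the server)
b = rng.normal(size=k)           # layer biases

z = W @ x + b                    # forward pass through the layer
g = z - 1.0                      # dL/dz for a squared loss L = 0.5*||z - 1||^2

grad_W = np.outer(g, x)          # dL/dW = g x^T  (what the client shares)
grad_b = g                       # dL/db = g

x_rec = grad_W[0] / grad_b[0]    # reconstruct x from neuron 0's gradients
assert np.allclose(x_rec, x)
```

With a batch, the client's gradients average over samples and the ratio no longer isolates a single input; attacks in this family (including, per the abstract, the hyperplane-based one proposed here) work by crafting W and b so that individual samples are separated across neurons.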
@article{diana2025_2505.10264,
  title={Cutting Through Privacy: A Hyperplane-Based Data Reconstruction Attack in Federated Learning},
  author={Francesco Diana and André Nusser and Chuan Xu and Giovanni Neglia},
  journal={arXiv preprint arXiv:2505.10264},
  year={2025}
}