Efficient Full-Stack Private Federated Deep Learning with Post-Quantum Security

Federated learning (FL) enables collaborative model training while preserving user data privacy by keeping data local. Despite these advantages, FL remains vulnerable to privacy attacks on user updates and model parameters during training and deployment. Secure aggregation protocols have been proposed to protect user updates by encrypting them, but these methods often incur high computational costs and are not resistant to attacks by quantum computers. Additionally, differential privacy (DP) has been used to mitigate privacy leakage, but existing methods focus on either secure aggregation or DP in isolation, neglecting their potential synergies. To address these gaps, we introduce Beskar, a novel framework that provides post-quantum secure aggregation, optimizes computational overhead for FL settings, and defines a comprehensive threat model that accounts for a wide spectrum of adversaries. We also integrate DP into different stages of FL training to enhance privacy protection in diverse scenarios. Our framework provides a detailed analysis of the trade-offs between security, performance, and model accuracy, representing the first thorough examination of secure aggregation protocols combined with various DP approaches for post-quantum secure FL. Beskar aims to address the pressing privacy and security issues in FL while ensuring quantum safety and robust performance.
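To make the two ingredients concrete, the following is a minimal Python sketch of how DP sanitization of client updates can compose with pairwise-masked secure aggregation, so the server only sees a noisy sum. This is purely illustrative: the helper names (dp_sanitize, masked_update), the clipping and noise parameters, and the pre-shared pairwise seeds are assumptions for this example; Beskar's actual post-quantum protocol, in which such seeds would come from an authenticated post-quantum key exchange, is not reproduced here.

```python
# Illustrative sketch only, not Beskar's protocol: DP-sanitized client
# updates combined with pairwise additive masking for secure aggregation.
import itertools
import numpy as np

def dp_sanitize(update, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Clip an update to L2 norm clip_norm and add Gaussian noise
    (local-DP flavor; noise could instead be added at another stage)."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    return clipped + rng.normal(0.0, noise_multiplier * clip_norm, update.shape)

def masked_update(client_id, update, pair_seeds):
    """Add pairwise masks that cancel across clients when summed.
    In a real protocol each seed comes from an authenticated key
    exchange (a post-quantum KEM in Beskar's setting); here the seeds
    are assumed pre-shared purely for illustration."""
    masked = update.copy()
    for other_id, seed in pair_seeds[client_id].items():
        mask = np.random.default_rng(seed).standard_normal(update.shape)
        masked += mask if client_id < other_id else -mask
    return masked

# --- usage: 3 clients, 4-parameter model ---
rng = np.random.default_rng(0)
clients = [0, 1, 2]
updates = [rng.standard_normal(4) for _ in clients]

# One shared seed per unordered client pair (hypothetical setup step).
seeds = {c: {} for c in clients}
for i, j in itertools.combinations(clients, 2):
    s = int(rng.integers(2**32))
    seeds[i][j] = s
    seeds[j][i] = s

sanitized = [dp_sanitize(u, rng=rng) for u in updates]
masked = [masked_update(c, u, seeds) for c, u in zip(clients, sanitized)]

# Masks cancel: the server learns only the noisy sum, not any single update.
assert np.allclose(sum(masked), sum(sanitized))
print("aggregate:", sum(masked))
```

The key property shown is that each pairwise mask appears once with a plus sign and once with a minus sign, so it vanishes from the aggregate while hiding every individual contribution from the server.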
@article{zhang2025_2505.05751,
  title={Efficient Full-Stack Private Federated Deep Learning with Post-Quantum Security},
  author={Yiwei Zhang and Rouzbeh Behnia and Attila A. Yavuz and Reza Ebrahimi and Elisa Bertino},
  journal={arXiv preprint arXiv:2505.05751},
  year={2025}
}