Privacy Preserving and Robust Aggregation for Cross-Silo Federated Learning in Non-IID Settings

6 March 2025
Marco Arazzi
Mert Cihangiroglu
Antonino Nocera
FedML
Abstract

Federated Averaging remains the most widely used aggregation strategy in federated learning due to its simplicity and scalability. However, its performance degrades significantly in non-IID settings, where client data distributions are highly imbalanced or skewed. It also relies on clients transmitting metadata, specifically the number of training samples, which introduces privacy risks and may conflict with regulatory frameworks such as the European Union's GDPR. In this paper, we propose a novel aggregation strategy that addresses these challenges by introducing class-aware gradient masking. Unlike traditional approaches, our method relies solely on gradient updates, eliminating the need for any additional client metadata and thereby strengthening privacy protection. Furthermore, our approach validates and dynamically weights client contributions based on class-specific importance, ensuring robustness against non-IID distributions, convergence-prevention attacks, and backdoor attacks. Extensive experiments on benchmark datasets demonstrate that our method not only outperforms FedAvg and other widely accepted aggregation strategies in non-IID settings, but also preserves model integrity in adversarial scenarios. Our results establish gradient masking as a practical and secure solution for federated learning.

View on arXiv: https://arxiv.org/abs/2503.04451
@article{arazzi2025_2503.04451,
  title={Privacy Preserving and Robust Aggregation for Cross-Silo Federated Learning in Non-IID Settings},
  author={Marco Arazzi and Mert Cihangiroglu and Antonino Nocera},
  journal={arXiv preprint arXiv:2503.04451},
  year={2025}
}