A Unified Solution to Diverse Heterogeneities in One-shot Federated Learning

28 October 2024
Jun Bai
Yiliao Song
Di Wu
Atul Sajjanhar
Yong Xiang
Wei Zhou
Xiaohui Tao
Yan Li
Yue Li
    FedML
Abstract

One-Shot Federated Learning (OSFL) restricts communication between the server and clients to a single round, significantly reducing communication costs and minimizing privacy leakage risks compared to traditional Federated Learning (FL), which requires multiple rounds of communication. However, existing OSFL frameworks remain vulnerable to distributional heterogeneity, as they primarily focus on model heterogeneity while neglecting data heterogeneity. To bridge this gap, we propose FedHydra, a unified, data-free OSFL framework designed to effectively address both model and data heterogeneity. Unlike existing OSFL approaches, FedHydra introduces a novel two-stage learning mechanism. Specifically, it incorporates model stratification and heterogeneity-aware stratified aggregation to mitigate the challenges posed by both model and data heterogeneity. With this design, data and model heterogeneity are monitored simultaneously from different perspectives during learning, so FedHydra can mitigate both issues by minimizing their inherent conflicts. We compare FedHydra with five state-of-the-art baselines on four benchmark datasets. Experimental results show that our method outperforms previous OSFL methods in both homogeneous and heterogeneous settings. Our code is available at this https URL.
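The abstract does not spell out FedHydra's algorithmic details, but the single-round communication pattern it builds on can be sketched. Below is a minimal, hypothetical Python illustration of OSFL: each client trains locally once, uploads its model, and the server aggregates in one shot with a heterogeneity-aware weighting heuristic. All function names, the logistic-regression client model, and the entropy-based weighting are illustrative assumptions, not the authors' method or code.

```python
# Illustrative sketch of one-shot federated learning (OSFL): each client
# trains locally and communicates with the server exactly once; the server
# then aggregates the received models. The heterogeneity-aware weighting
# below is a hypothetical stand-in, NOT the FedHydra algorithm.
import numpy as np

def local_train(X, y, epochs=50, lr=0.1):
    """Train a simple logistic-regression model on one client's local data."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))       # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)       # gradient step
    return w

def heterogeneity_weight(y, num_classes=2):
    """Hypothetical heuristic: weight clients by label-distribution entropy,
    so clients with less skewed local data contribute more to the average."""
    counts = np.bincount(y, minlength=num_classes).astype(float) + 1e-9
    probs = counts / counts.sum()
    return -np.sum(probs * np.log(probs))

def one_shot_aggregate(client_data):
    """Single communication round: collect client models, aggregate once."""
    models, weights = [], []
    for X, y in client_data:                   # each tuple is one client's data
        models.append(local_train(X, y))
        weights.append(heterogeneity_weight(y))
    weights = np.array(weights) / np.sum(weights)
    return np.average(np.stack(models), axis=0, weights=weights)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clients = []
    for skew in (0.5, 0.8, 0.95):              # simulate non-IID label skew
        X = rng.normal(size=(200, 5))
        y = (rng.random(200) < skew).astype(int)
        clients.append((X, y))
    global_model = one_shot_aggregate(clients)
    print("Aggregated global model:", global_model)
```

The key property this sketch shares with OSFL is that clients never receive an updated global model for further rounds; everything the server learns must come from the single upload, which is why heterogeneity-aware aggregation matters.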

View on arXiv
@article{bai2025_2410.21119,
  title={A Unified Solution to Diverse Heterogeneities in One-shot Federated Learning},
  author={Jun Bai and Yiliao Song and Di Wu and Atul Sajjanhar and Yong Xiang and Wei Zhou and Xiaohui Tao and Yan Li and Yue Li},
  journal={arXiv preprint arXiv:2410.21119},
  year={2025}
}