ResearchTrend.AI

Cluster-Aware Multi-Round Update for Wireless Federated Learning in Heterogeneous Environments

6 May 2025
Pengcheng Sun
Erwu Liu
Wei Ni
Kanglei Yu
Rui Wang
Abbas Jamalipour
Abstract

The aggregation efficiency and accuracy of wireless Federated Learning (FL) are significantly affected by resource constraints, especially in heterogeneous environments where devices exhibit distinct data distributions and communication capabilities. This paper proposes a clustering strategy that leverages prior-knowledge similarity to group devices with similar data and communication characteristics, mitigating the performance degradation caused by heterogeneity. On this basis, a novel Cluster-Aware Multi-round Update (CAMU) strategy is proposed, which treats clusters as the basic units and adjusts each cluster's local update frequency according to a cluster contribution threshold, effectively reducing update bias and enhancing aggregation accuracy. The theoretical convergence of the CAMU strategy is rigorously validated. Furthermore, based on the derived convergence upper bound, the local update frequency and transmission power of each cluster are jointly optimized to balance computation and communication resources under constrained conditions, significantly improving the convergence efficiency of FL. Experimental results demonstrate that the proposed method improves FL model performance in heterogeneous environments and achieves a better trade-off between communication cost and computational load under limited resources.
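The overall workflow described in the abstract (cluster devices by a similarity feature, let each cluster run its own number of local updates, then aggregate cluster models at the server) can be illustrated with a toy simulation. This is only a minimal sketch, not the paper's actual CAMU algorithm: the `kmeans` clustering feature, the least-squares loss, the fixed per-cluster update counts `tau`, and all function names are illustrative assumptions, and the paper's contribution-threshold rule and power optimization are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, iters=20):
    """Minimal k-means used to group devices by a prior-knowledge feature."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def camu_round(w_global, devices, labels, tau, lr=0.01):
    """One hypothetical cluster-aware round: cluster c runs tau[c] local
    gradient steps on a least-squares loss, then the server averages the
    cluster models weighted by cluster data size."""
    cluster_models, cluster_sizes = [], []
    for c in np.unique(labels):
        members = [devices[i] for i in np.where(labels == c)[0]]
        w = w_global
        for _ in range(tau[c]):
            grads = [2 * X.T @ (X @ w - y) / len(y) for X, y in members]
            w = w - lr * np.mean(grads, axis=0)
        cluster_models.append(w)
        cluster_sizes.append(sum(len(y) for _, y in members))
    weights = np.array(cluster_sizes) / sum(cluster_sizes)
    return sum(wt * m for wt, m in zip(weights, cluster_models))

# Toy heterogeneous setup: two latent device groups with shifted inputs.
d = 3
true_w = np.array([1.0, -2.0, 0.5])
devices, feats = [], []
for i in range(10):
    shift = 0.0 if i < 5 else 3.0          # distinct data distributions
    X = rng.normal(shift, 1.0, size=(20, d))
    y = X @ true_w + rng.normal(0.0, 0.1, size=20)
    devices.append((X, y))
    feats.append(X.mean(axis=0))           # "prior knowledge" feature

labels = kmeans(np.array(feats), k=2)
tau = {c: 3 for c in np.unique(labels)}    # per-cluster local update frequency

def global_loss(w):
    return float(np.mean([np.mean((X @ w - y) ** 2) for X, y in devices]))

w = np.zeros(d)
loss0 = global_loss(w)
for _ in range(20):
    w = camu_round(w, devices, labels, tau)
loss_final = global_loss(w)
```

In this sketch the clusters are the aggregation units, as in the abstract; making `tau` depend on a measured cluster contribution, and choosing `tau` and transmission power from a convergence bound, is where the paper's actual method goes beyond this toy.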

@article{sun2025_2505.06268,
  title={Cluster-Aware Multi-Round Update for Wireless Federated Learning in Heterogeneous Environments},
  author={Pengcheng Sun and Erwu Liu and Wei Ni and Kanglei Yu and Rui Wang and Abbas Jamalipour},
  journal={arXiv preprint arXiv:2505.06268},
  year={2025}
}