
Efficient Federated Learning with Heterogeneous Data and Adaptive Dropout

ACM Transactions on Knowledge Discovery from Data (TKDD), 2025
Ji Liu
Beichen Ma
Qiaolin Yu
Ruoming Jin
Jingbo Zhou
Yang Zhou
Huaiyu Dai
Haixun Wang
Dejing Dou
Patrick Valduriez
Main: 25 pages, 7 figures, 7 tables; Bibliography: 4 pages
Abstract

Federated Learning (FL) is a promising distributed machine learning approach that enables collaborative training of a global model across multiple edge devices. The data distributed among edge devices is highly heterogeneous; FL therefore faces the challenge of statistical data heterogeneity, where non-Independent and Identically Distributed (non-IID) data across edge devices can cause a significant drop in accuracy. Furthermore, the limited computation and communication capabilities of edge devices increase the likelihood of stragglers, leading to slow model convergence. In this paper, we propose the FedDHAD FL framework, which comprises two novel methods: Dynamic Heterogeneous model aggregation (FedDH) and Adaptive Dropout (FedAD). FedDH dynamically adjusts the weight of each local model during model aggregation based on the non-IID degree of the heterogeneous data, thereby addressing statistical data heterogeneity. FedAD performs neuron-adaptive dropout operations tailored to heterogeneous devices, improving accuracy while maintaining high efficiency. Together, these two methods enable FedDHAD to significantly outperform state-of-the-art solutions in accuracy (up to 6.7% higher), efficiency (up to 2.02 times faster), and computation cost (up to 15.0% smaller).
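To make the aggregation idea concrete, the sketch below shows one plausible form of heterogeneity-aware weighted averaging: each client's contribution is scaled by its sample count and discounted by a proxy for its non-IID degree (here, the L1 distance between the client's label distribution and the global one). The weighting formula, function names, and the `alpha` parameter are illustrative assumptions, not FedDH's actual rule.

```python
# Illustrative sketch of heterogeneity-aware model aggregation.
# NOTE: the weight formula and non-IID proxy below are hypothetical,
# chosen for clarity; they are not the FedDH algorithm from the paper.

def non_iid_degree(local_dist, global_dist):
    """Proxy for a client's non-IID degree: total variation distance
    between its label distribution and the global label distribution."""
    labels = set(local_dist) | set(global_dist)
    return 0.5 * sum(abs(local_dist.get(k, 0.0) - global_dist.get(k, 0.0))
                     for k in labels)

def aggregate(local_models, num_samples, degrees, alpha=1.0):
    """Parameter-wise weighted average of client models.

    Each client's weight is its sample count scaled down by its
    non-IID degree, so more skewed clients contribute less.
    `local_models` is a list of flat parameter vectors (lists of floats).
    """
    raw = [n / (1.0 + alpha * d) for n, d in zip(num_samples, degrees)]
    total = sum(raw)
    weights = [r / total for r in raw]
    n_params = len(local_models[0])
    return [sum(w * model[i] for w, model in zip(weights, local_models))
            for i in range(n_params)]
```

With two equally sized clients, one IID and one holding only a single label, the skewed client receives a smaller weight and the aggregate is pulled toward the IID client's parameters.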
