Provably Near-Optimal Federated Ensemble Distillation with Negligible Overhead

Abstract

Federated ensemble distillation addresses client heterogeneity by generating pseudo-labels for an unlabeled server dataset based on client predictions and training the server model on the pseudo-labeled dataset. The unlabeled server dataset can either be pre-existing or generated through a data-free approach. The effectiveness of this approach critically depends on the method of assigning weights to client predictions when creating pseudo-labels, especially in highly heterogeneous settings. Inspired by theoretical results on generative adversarial networks (GANs), we propose a provably near-optimal weighting method that leverages client discriminators trained with a server-distributed generator and local datasets. Our experiments on various image classification tasks demonstrate that the proposed method significantly outperforms baselines. Furthermore, we show that the additional communication cost, client-side privacy leakage, and client-side computational overhead introduced by our method are negligible, both in scenarios with and without a pre-existing server dataset.
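
As a rough illustration of the weighted pseudo-labeling step described in the abstract (a minimal sketch, not the authors' exact algorithm), the Python snippet below forms soft pseudo-labels on an unlabeled server dataset as a per-sample weighted average of client predictions. The discriminator-derived scores, array shapes, and function name are illustrative assumptions.

import numpy as np

def ensemble_pseudo_labels(client_probs, disc_scores):
    # client_probs: (K, N, C) softmax outputs of K clients on N unlabeled
    #               server samples with C classes.
    # disc_scores:  (K, N) per-sample scores, assumed here to come from
    #               client discriminators, with higher values meaning the
    #               sample looks more like that client's local data.
    # Normalize scores across clients into per-sample weights.
    weights = disc_scores / disc_scores.sum(axis=0, keepdims=True)  # (K, N)
    # Per-sample weighted average of client predictions.
    return np.einsum("kn,knc->nc", weights, client_probs)           # (N, C)

# Toy usage: 3 clients, 5 server samples, 10 classes.
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(10), size=(3, 5))   # (3, 5, 10)
scores = rng.uniform(0.1, 1.0, size=(3, 5))       # (3, 5)
soft_labels = ensemble_pseudo_labels(probs, scores)
print(soft_labels.shape)  # (5, 10)

The resulting (N, C) soft labels would then serve as distillation targets for the server model; since the weights sum to one over clients, each pseudo-label remains a valid probability distribution.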

@article{jang2025_2502.06349,
  title={Provably Near-Optimal Federated Ensemble Distillation with Negligible Overhead},
  author={Won-Jun Jang and Hyeon-Seo Park and Si-Hyeon Lee},
  journal={arXiv preprint arXiv:2502.06349},
  year={2025}
}