Feature Distribution Matching for Federated Domain Generalization
- OOD
Multi-source domain adaptation has been studied intensively. Distribution shift in domain-specific features causes negative transfer, degrading a model's generality to unseen tasks. In Federated Learning (FL), learned model parameters are shared to train a global model that leverages knowledge from different domains. Nonetheless, the data confidentiality of FL hinders traditional domain adaptation methods, which require prior knowledge of the different domains' data. To this end, we propose a new federated domain generalization method called Federated Knowledge Alignment (FedKA). FedKA leverages feature distribution matching in a global workspace so that the global model can learn domain-invariant client features under the constraint that domain data remain unknown. A federated voting mechanism generates target-domain pseudo-labels from client consensus, facilitating global model fine-tuning. We performed extensive experiments, including an ablation study, to evaluate the effectiveness of the proposed method in both image classification tasks and a text classification task, using model architectures of different complexities. The empirical results show that FedKA achieves performance gains of 8.8% on Digit-Five and 3.5% on Office-Caltech10, and a gain of 0.7% on Amazon Review with extremely limited training data.
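The two ingredients described above can be illustrated with a minimal sketch. This is not the paper's implementation: the linear-kernel MMD loss and the `min_agree` consensus threshold are assumptions chosen for illustration; FedKA's actual matching loss and voting rule may differ in detail.

```python
import numpy as np

def mmd_linear(src_feats, tgt_feats):
    """Linear-kernel MMD between two feature batches: one simple way to
    measure (and, as a loss, reduce) feature distribution shift.
    The paper's exact matching loss is assumed, not reproduced."""
    delta = src_feats.mean(axis=0) - tgt_feats.mean(axis=0)
    return float(delta @ delta)

def federated_vote(client_logits, min_agree=2):
    """Pseudo-label unlabeled target samples by majority vote across clients.

    client_logits: list of (n_samples, n_classes) arrays, one per client model.
    Returns (indices, labels) for samples where at least `min_agree` clients
    agree on the predicted class; the rest are left unlabeled.
    """
    preds = np.stack([l.argmax(axis=1) for l in client_logits])  # (n_clients, n_samples)
    idx, labels = [], []
    for i in range(preds.shape[1]):
        vals, counts = np.unique(preds[:, i], return_counts=True)
        j = counts.argmax()
        if counts[j] >= min_agree:  # keep only consensus samples
            idx.append(i)
            labels.append(int(vals[j]))
    return np.array(idx), np.array(labels)
```

Raising `min_agree` trades pseudo-label coverage for reliability: only samples on which the client models concur survive to fine-tune the global model.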