Multi-Source Collaborative Style Augmentation and Domain-Invariant Learning for Federated Domain Generalization

Abstract

Federated domain generalization aims to learn a generalizable model from multiple decentralized source domains and deploy it on an unseen target domain. Style augmentation methods have made great progress on domain generalization. However, under the data decentralization scenario, existing style augmentation methods either explore data styles within each isolated source domain or only interpolate style information across the existing source domains, which leads to a limited style space. To address this issue, we propose a Multi-source Collaborative Style Augmentation and Domain-invariant learning method (MCSAD) for federated domain generalization. Specifically, we propose a multi-source collaborative style augmentation module to generate data in a broader style space. Furthermore, we conduct domain-invariant learning between the original data and the augmented data, via cross-domain feature alignment within the same class and class-relation ensemble distillation between different classes, to learn a domain-invariant model. By alternately performing collaborative style augmentation and domain-invariant learning, the model generalizes well to unseen target domains. Extensive experiments on multiple domain generalization datasets show that our method significantly outperforms state-of-the-art federated domain generalization methods.

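The abstract does not spell out how the collaborative style augmentation is computed. As a rough illustration only, the sketch below shows a common form of style augmentation via channel-wise feature statistics (in the spirit of AdaIN/MixStyle), where a client mixes its own feature styles with style statistics shared by other source domains (e.g., exchanged through the federated server). The function names, the plain interpolation scheme, and the way shared statistics are obtained are assumptions for illustration; the paper's MCSAD module is described as exploring a broader style space than this simple interpolation.

```python
# Hypothetical sketch of cross-domain style augmentation via feature statistics.
# Not the paper's exact MCSAD module; names and the mixing rule are illustrative.
import torch

def channel_stats(x, eps=1e-6):
    """Per-sample channel-wise mean and std of a feature map (N, C, H, W)."""
    mu = x.mean(dim=(2, 3), keepdim=True)
    sigma = x.var(dim=(2, 3), keepdim=True).add(eps).sqrt()
    return mu, sigma

def stylize(x, new_mu, new_sigma):
    """Replace the style (mean/std) of x while keeping its content."""
    mu, sigma = channel_stats(x)
    return (x - mu) / sigma * new_sigma + new_mu

def collaborative_style_augment(x, shared_mu, shared_sigma, alpha=0.5):
    """Mix a sample's own style with a style drawn from statistics shared
    by other source domains. shared_mu / shared_sigma: (K, C, 1, 1) for K styles."""
    k = torch.randint(0, shared_mu.size(0), (x.size(0),))   # pick a shared style per sample
    mu, sigma = channel_stats(x)
    lam = torch.empty(x.size(0), 1, 1, 1).uniform_(0, alpha)  # mixing weight
    mix_mu = (1 - lam) * mu + lam * shared_mu[k]
    mix_sigma = (1 - lam) * sigma + lam * shared_sigma[k]
    return stylize(x, mix_mu, mix_sigma)

# Toy usage: features from one client, styles shared by two other source domains.
feats = torch.randn(8, 64, 32, 32)
shared_mu = torch.randn(2, 64, 1, 1)
shared_sigma = torch.rand(2, 64, 1, 1) + 0.5
augmented = collaborative_style_augment(feats, shared_mu, shared_sigma)
print(augmented.shape)  # torch.Size([8, 64, 32, 32])
```

In a federated setting, only the low-dimensional style statistics (not raw data) would be exchanged, which keeps the augmentation compatible with data decentralization.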
@article{wei2025_2505.10152,
  title={Multi-Source Collaborative Style Augmentation and Domain-Invariant Learning for Federated Domain Generalization},
  author={Yikang Wei},
  journal={arXiv preprint arXiv:2505.10152},
  year={2025}
}