FedSDAF: Leveraging Source Domain Awareness for Enhanced Federated Domain Generalization

5 May 2025
Hongze Li
Zesheng Zhou
Zhenbiao Cao
Xinhui Li
Wei Chen
Xiaojin Zhang
    FedML
Abstract

Traditional domain generalization approaches predominantly focus on leveraging target domain-aware features while overlooking the critical role of source domain-specific characteristics, particularly in federated settings with inherent data isolation. To address this gap, we propose the Federated Source Domain Awareness Framework (FedSDAF), the first method to systematically exploit source domain-aware features for enhanced federated domain generalization (FedDG). The FedSDAF framework consists of two synergistic components: the Domain-Invariant Adapter, which preserves critical domain-invariant features, and the Domain-Aware Adapter, which extracts and integrates source domain-specific knowledge using a Multihead Self-Attention mechanism (MHSA). Furthermore, we introduce a bidirectional knowledge distillation mechanism that fosters knowledge sharing among clients while safeguarding privacy. Our approach represents the first systematic exploitation of source domain-aware features, resulting in significant advancements in model generalization. Extensive experiments on four standard benchmarks (OfficeHome, PACS, VLCS, and DomainNet) show that our method consistently surpasses state-of-the-art federated domain generalization approaches, with accuracy gains of 5.2-13.8%. The source code is available at this https URL.
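The abstract describes two adapter branches on top of a shared backbone (a domain-invariant adapter and an MHSA-based domain-aware adapter) coupled by a bidirectional knowledge distillation mechanism. As a rough illustration only, the following PyTorch sketch shows one way such components could look; the class names, bottleneck sizes, attention setup, and the symmetric-KL distillation loss are assumptions made for exposition, not the authors' implementation (the released source code is the authoritative reference).

# Hypothetical sketch of the dual-adapter idea described in the abstract.
# All module names, shapes, and loss choices are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BottleneckAdapter(nn.Module):
    """Residual bottleneck adapter, standing in for the Domain-Invariant Adapter."""
    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, hidden)
        self.up = nn.Linear(hidden, dim)

    def forward(self, x):
        # Residual connection keeps the backbone's domain-invariant features intact.
        return x + self.up(F.relu(self.down(x)))

class DomainAwareAdapter(nn.Module):
    """Adapter that integrates source-domain-specific knowledge via multihead self-attention."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):                       # x: (batch, dim) backbone features
        tokens = x.unsqueeze(1)                 # treat each feature vector as a 1-token sequence
        attended, _ = self.attn(tokens, tokens, tokens)
        return x + self.proj(attended.squeeze(1))

def bidirectional_kd_loss(logits_inv, logits_aware, T: float = 2.0):
    """Symmetric KL divergence between the two branches (assumed form of the bidirectional KD)."""
    log_p_inv = F.log_softmax(logits_inv / T, dim=-1)
    log_p_aware = F.log_softmax(logits_aware / T, dim=-1)
    kd = F.kl_div(log_p_inv, log_p_aware.exp(), reduction="batchmean") \
       + F.kl_div(log_p_aware, log_p_inv.exp(), reduction="batchmean")
    return (T ** 2) * kd

In a federated round, each client would presumably train both branches on its local source domain and share only adapter parameters or distilled knowledge rather than raw data, which is consistent with the privacy framing in the abstract.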

@article{li2025_2505.02515,
  title={FedSDAF: Leveraging Source Domain Awareness for Enhanced Federated Domain Generalization},
  author={Hongze Li and Zesheng Zhou and Zhenbiao Cao and Xinhui Li and Wei Chen and Xiaojin Zhang},
  journal={arXiv preprint arXiv:2505.02515},
  year={2025}
}