ResearchTrend.AI


arXiv:2107.10931
Domain Generalization under Conditional and Label Shifts via Variational Bayesian Inference

22 July 2021
Xiaofeng Liu
Bo Hu
Linghao Jin
Xu Han
Fangxu Xing
J. Ouyang
Jun Lu
G. El Fakhri
Jonghye Woo
    OOD
    BDL
Abstract

In this work, we propose a domain generalization (DG) approach to learn on several labeled source domains and transfer knowledge to a target domain that is inaccessible in training. Considering the inherent conditional and label shifts, we would expect the alignment of p(x|y) and p(y). However, the widely used domain-invariant feature learning (IFL) methods rely on aligning the marginal concept shift w.r.t. p(x), which rests on the unrealistic assumption that p(y) is invariant across domains. We therefore propose a novel variational Bayesian inference framework to enforce conditional distribution alignment w.r.t. p(x|y) via prior distribution matching in a latent space, which also takes the marginal label shift w.r.t. p(y) into consideration with the posterior alignment. Extensive experiments on various benchmarks demonstrate that our framework is robust to the label shift and significantly improves cross-domain accuracy, thereby achieving superior performance over conventional IFL counterparts.
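To illustrate the kind of class-conditional prior matching the abstract describes, here is a minimal NumPy sketch: encoded posteriors q(z|x) are pulled toward per-class latent priors p(z|y) via a Gaussian KL term, so alignment is conditioned on the label rather than on the marginal p(x). The per-class priors, latent dimension, and loss form are hypothetical placeholders for exposition, not the authors' actual model.

```python
import numpy as np

def gaussian_kl(mu_q, var_q, mu_p, var_p):
    """KL( N(mu_q, var_q) || N(mu_p, var_p) ) for diagonal Gaussians."""
    return 0.5 * np.sum(
        np.log(var_p / var_q) + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0,
        axis=-1,
    )

# Hypothetical per-class latent priors p(z|y): one diagonal Gaussian per class.
CLASS_PRIORS = {
    0: (np.zeros(2), np.ones(2)),
    1: (np.full(2, 3.0), np.ones(2)),
}

def conditional_alignment_loss(mu, var, labels):
    """Average KL between encoded posteriors q(z|x) and their
    class-conditional priors p(z|y). Because the prior depends on y,
    this aligns p(z|y) across domains regardless of each domain's p(y)."""
    total = 0.0
    for m, v, y in zip(mu, var, labels):
        mu_p, var_p = CLASS_PRIORS[y]
        total += gaussian_kl(m, v, mu_p, var_p)
    return total / len(labels)
```

Under this toy setup, samples whose posteriors already sit on their class prior incur zero loss even if the two domains have very different label marginals, which is the point of conditioning the alignment on y.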
