ResearchTrend.AI

arXiv:2107.13469

Adversarial Unsupervised Domain Adaptation with Conditional and Label Shift: Infer, Align and Iterate

28 July 2021
Xiaofeng Liu
Zhenhua Guo
Site Li
Fangxu Xing
J. You
C.-C. Jay Kuo
G. El Fakhri
Jonghye Woo
Abstract

In this work, we propose an adversarial unsupervised domain adaptation (UDA) approach under inherent conditional and label shifts, in which we aim to align the distributions w.r.t. both p(x|y) and p(y). Since labels are inaccessible in the target domain, conventional adversarial UDA assumes p(y) is invariant across domains and relies on aligning p(x) as a surrogate for aligning p(x|y). To address this limitation, we provide a thorough theoretical and empirical analysis of conventional adversarial UDA methods under both conditional and label shifts, and propose a novel and practical alternating optimization scheme for adversarial UDA. Specifically, we infer the marginal p(y) and align p(x|y) iteratively during training, and precisely align the posterior p(y|x) at test time. Our experimental results demonstrate the method's effectiveness on classification UDA, segmentation UDA, and partial UDA.
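The infer-align-iterate idea described above can be illustrated with a minimal sketch. This is not the authors' implementation: the function names are hypothetical, the target label marginal is estimated here by simply averaging predicted posteriors (the paper's estimator may differ), and the test-time step applies the standard prior-ratio correction p_t(y|x) ∝ p_s(y|x) · p_t(y)/p_s(y).

```python
import numpy as np

def estimate_target_prior(target_posteriors):
    """Crude estimate of the target marginal p_t(y): average the model's
    predicted class posteriors over unlabeled target samples.
    target_posteriors: array of shape (n_samples, n_classes)."""
    return target_posteriors.mean(axis=0)

def class_reweighting(target_prior, source_prior):
    """Importance weights p_t(y)/p_s(y) used to reweight source classes
    during the adversarial p(x|y)-alignment step of the training loop."""
    return target_prior / source_prior

def align_posterior(source_posterior, target_prior, source_prior):
    """Test-time correction: p_t(y|x) proportional to p_s(y|x) * p_t(y)/p_s(y),
    renormalized to sum to one over classes."""
    adjusted = source_posterior * (target_prior / source_prior)
    return adjusted / adjusted.sum(axis=-1, keepdims=True)

# Conceptual training loop (alternating scheme):
#   repeat:
#     1. infer p_t(y) on target data with the current model
#     2. reweight source classes by p_t(y)/p_s(y) and run one round of
#        adversarial feature alignment (approximating p(x|y) alignment)
#   at test time: apply align_posterior to the classifier's outputs.
```

For example, with a uniform source prior and an estimated target prior of [0.8, 0.2], a maximally uncertain prediction [0.5, 0.5] is shifted toward the majority class after correction.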
