
Adversarial Representation Learning for Domain Adaptation

Abstract

Domain adaptation aims at generalizing a high-performance learner to a target domain by utilizing knowledge distilled from a source domain that has a different but related data distribution. One class of domain adaptation solutions learns feature representations that are invariant to the change of domains yet discriminative for predicting the target labels. Recently, generative adversarial nets (GANs) have been widely studied for learning a generator that approximates the true data distribution by trying to fool an adversarial discriminator in a minimax game. Inspired by GANs, we propose a novel Adversarial Representation learning approach for Domain Adaptation (ARDA) that learns high-level feature representations that are both domain-invariant and target-discriminative, in order to tackle the cross-domain classification problem. Specifically, the approach exploits the differentiability of the Wasserstein distance as a measure of distribution divergence by incorporating the Wasserstein GAN. Our architecture consists of three parts: a feature generator that produces the desired features from inputs of both domains, a critic that estimates the Wasserstein distance between the generated feature distributions, and an adaptive classifier that accomplishes the final classification task. Empirical studies on four common domain adaptation datasets demonstrate that ARDA outperforms state-of-the-art domain-invariant feature learning approaches.
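To make the critic's role concrete, the following is a minimal NumPy sketch of the Wasserstein objective described above: a critic is trained to maximize the difference between its mean output on source-domain features and on target-domain features, with weight clipping as a crude Lipschitz constraint (as in the Wasserstein GAN). The linear critic, the synthetic "generated features", and all dimensions and step sizes are illustrative assumptions, not the paper's actual architecture; the real feature generator and critic would be neural networks trained jointly with the classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for feature-generator outputs on a source and a target batch
# (hypothetical data; in ARDA these come from a shared feature generator).
feat_src = rng.normal(loc=0.0, size=(64, 8))
feat_tgt = rng.normal(loc=1.0, size=(64, 8))

def critic(features, w, b):
    """Linear critic f(h) = h @ w + b (illustrative; a real critic is an MLP)."""
    return features @ w + b

w = rng.normal(size=8)
b = 0.0
clip = 0.01  # weight clipping crudely enforces the Lipschitz constraint

# Gradient ascent on the empirical objective E[f(src)] - E[f(tgt)].
# For a linear critic, the gradient w.r.t. w is mean(src) - mean(tgt).
for _ in range(100):
    grad_w = feat_src.mean(axis=0) - feat_tgt.mean(axis=0)
    w = np.clip(w + 0.01 * grad_w, -clip, clip)

# Empirical Wasserstein estimate: positive once the critic separates the domains.
wasserstein_est = critic(feat_src, w, b).mean() - critic(feat_tgt, w, b).mean()
print(wasserstein_est)
```

In the full minimax game, the feature generator would then be updated to *minimize* this estimate, pushing the two feature distributions together, while the classifier keeps the shared features discriminative.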
