
Provably Uncertainty-Guided Universal Domain Adaptation

Abstract

Universal domain adaptation (UniDA) aims to transfer knowledge of the common classes from a source domain to a target domain without any prior knowledge of the label sets, which requires distinguishing unknown samples from known ones in the target domain. As in traditional unsupervised domain adaptation, misalignment between the two domains arises from the biased and less-discriminative embedding of the target domain. Recent methods attempt to correct this misalignment by clustering target samples with their nearest neighbors or nearest prototypes. However, this is risky because both known and unknown samples may lie on the edges of source clusters. Meanwhile, existing classifier-based methods can easily produce overconfident predictions for unknown samples, because the supervised objective on the source domain biases the whole model towards the common classes. To address the first issue, we exploit the distribution of target samples and introduce an empirical estimate of the probability that a target sample belongs to the unknown class. Based on this estimate, we propose a novel unknown-sample discovery method that operates in a linear subspace with a δ-filter to estimate the uncertainty of each target sample, fully exploiting the relationship between a target sample and its neighbors. For the second issue, we balance the confidence of known and unknown samples through an uncertainty-guided margin loss, which enforces a margin on source samples to encourage an intra-class variance of source samples similar to that of unknown samples.
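The neighbor-based uncertainty idea can be illustrated with a minimal sketch: reconstruct a target feature as a linear combination of its neighbors' features, suppress small coefficients with a δ threshold, and read the normalized reconstruction residual as an uncertainty score. The function name, the least-squares reconstruction, and the residual-based score below are illustrative assumptions; the paper's exact subspace construction may differ.

```python
import numpy as np

def delta_filter_uncertainty(target_feat, neighbor_feats, delta=0.1):
    """Hedged sketch: uncertainty of a target sample from its neighbors.

    Reconstruct the target feature as a linear combination of neighbor
    features (least squares), zero out coefficients whose magnitude is
    below `delta` (the delta-filter), and return the normalized
    reconstruction residual. A sample well explained by its neighbors
    (likely a known class) gets low uncertainty; one far from the
    neighbor subspace (likely unknown) gets high uncertainty.
    """
    N = np.asarray(neighbor_feats, dtype=float)   # (k, d) neighbor features
    z = np.asarray(target_feat, dtype=float)      # (d,) target feature
    w, *_ = np.linalg.lstsq(N.T, z, rcond=None)   # solve N^T w ~= z
    w[np.abs(w) < delta] = 0.0                    # delta-filter small weights
    residual = np.linalg.norm(z - N.T @ w)        # reconstruction error
    return residual / (np.linalg.norm(z) + 1e-12) # normalized uncertainty
```

A target lying in the span of its neighbors yields an uncertainty near 0, while one orthogonal to that span yields an uncertainty near 1, giving a natural score for flagging unknown samples.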
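The margin loss on source samples can likewise be sketched in a few lines. Here a fixed additive margin is subtracted from the ground-truth logit before the softmax cross-entropy, which lowers source-sample confidence; in the paper this margin is guided by the estimated uncertainty, a coupling omitted here for brevity. The function name and the additive-margin form are assumptions, not the authors' exact formulation.

```python
import numpy as np

def margin_softmax_loss(logits, labels, margin=0.5):
    """Softmax cross-entropy with an additive margin on the true class.

    Subtracting `margin` from the ground-truth logit forces the model to
    separate classes by at least that margin, tempering overconfident
    predictions on source (known) samples so their confidence is closer
    to that of unknown target samples. Illustrative sketch only.
    """
    z = np.asarray(logits, dtype=float).copy()    # (n, c) class logits
    y = np.asarray(labels)                        # (n,) integer labels
    idx = np.arange(len(y))
    z[idx, y] -= margin                           # penalize the true class
    z -= z.max(axis=1, keepdims=True)             # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[idx, y].mean()
```

With `margin > 0` the loss is strictly larger than plain cross-entropy on the same logits, so minimizing it pushes the true-class logit further above the others than the standard objective would.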
