Reducing Domain Gap via Style-Agnostic Networks

Abstract

Convolutional Neural Networks (CNNs) often fail to maintain their performance when confronted with new test domains, which poses a substantial obstacle to real-world applications of deep learning. Recent studies suggest that one of the main causes of this problem is CNNs' inductive bias towards image styles (i.e., textures), which are highly dependent on domains, rather than contents (i.e., shapes). Motivated by this, we propose Style-Agnostic Networks (SagNets), which mitigate the style bias to generalize better under domain shift. Our experiments demonstrate that SagNets successfully reduce both the style bias and the domain discrepancy, and reveal a strong correlation between the bias and the domain gap. Finally, SagNets achieve remarkable performance improvements in a wide range of cross-domain tasks, including domain generalization, unsupervised domain adaptation, and semi-supervised domain adaptation.
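To make the notion of "style" concrete: a common observation underlying this line of work is that the per-channel mean and standard deviation of CNN feature maps largely encode style, while the normalized spatial pattern encodes content. The sketch below illustrates one way to perturb style while preserving content, by randomly interpolating feature statistics between images in a batch. This is a simplified, hedged illustration of the general idea, not the paper's exact procedure; the function name, interpolation scheme, and where in the network it would be applied are all assumptions.

```python
import numpy as np

def style_randomization(x, rng=None, eps=1e-5):
    """Randomize the "style" of feature maps x of shape (B, C, H, W).

    Illustrative sketch: per-channel mean/std are treated as style,
    the normalized feature map as content. Each image's statistics are
    interpolated with those of a randomly paired image, so content is
    kept while style is perturbed. (Details are assumptions, not the
    verbatim SagNets algorithm.)
    """
    rng = np.random.default_rng() if rng is None else rng

    # Per-channel style statistics over spatial dimensions.
    mu = x.mean(axis=(2, 3), keepdims=True)
    sig = np.sqrt(x.var(axis=(2, 3), keepdims=True) + eps)

    # Strip style: normalized maps carry the (shape-like) content.
    x_norm = (x - mu) / sig

    # Pair each image with a random other image and mix their statistics.
    batch = x.shape[0]
    perm = rng.permutation(batch)
    alpha = rng.random((batch, 1, 1, 1))
    mu_mix = alpha * mu + (1 - alpha) * mu[perm]
    sig_mix = alpha * sig + (1 - alpha) * sig[perm]

    # Re-dress the content with the mixed style.
    return x_norm * sig_mix + mu_mix
```

Feeding such style-randomized features to a content-focused classifier discourages it from relying on style cues, which is the intuition behind reducing the style bias described above.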
