Overcoming Negative Transfer: A Survey
Transfer learning (TL) tries to utilize data or knowledge from one or more source domains to facilitate learning in a target domain. It is particularly useful when the target domain has little or no labeled data, due to annotation expense, privacy concerns, etc. Unfortunately, the effectiveness of TL is not always guaranteed. Negative transfer (NT), i.e., the phenomenon in which source domain data/knowledge reduce learning performance in the target domain, has been a long-standing and challenging problem in TL. Various approaches to overcome NT have been proposed in the literature, but there has not been a systematic survey of them. This paper fills the gap by categorizing and reviewing nearly 100 approaches for combating NT, from four perspectives: source data quality, target data quality, domain divergence, and integrated algorithms. NT in related fields, e.g., multi-task learning, multilingual models, and lifelong learning, is also discussed.