
Deep Transfer Learning: Model Framework and Error Analysis

Abstract

This paper presents a framework for deep transfer learning, which aims to leverage information from multi-domain upstream data with a large number of samples $n$ to a single-domain downstream task with a considerably smaller number of samples $m$, where $m \ll n$, in order to enhance performance on the downstream task. Our framework offers several intriguing features. First, it allows the existence of both shared and domain-specific features across multi-domain data and provides automatic identification of these features, achieving precise transfer and utilization of information. Second, the framework explicitly identifies the upstream features that contribute to downstream tasks, establishing clear relationships between upstream domains and downstream tasks and thereby enhancing interpretability. Error analysis shows that our framework can significantly improve the convergence rate for learning Lipschitz functions in downstream supervised tasks, reducing it from $\tilde{O}(m^{-\frac{1}{2(d+2)}}+n^{-\frac{1}{2(d+2)}})$ ("no transfer") to $\tilde{O}(m^{-\frac{1}{2(d^*+3)}} + n^{-\frac{1}{2(d+2)}})$ ("partial transfer"), and even to $\tilde{O}(m^{-1/2}+n^{-\frac{1}{2(d+2)}})$ ("complete transfer"), where $d^* \ll d$ and $d$ is the dimension of the observed data. Our theoretical findings are supported by empirical experiments on image classification and regression datasets.
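To make the stated rate improvement concrete, the following minimal Python sketch (not from the paper) compares the $m$-dependent factor of the three regimes under illustrative values $d = 100$, $d^* = 5$, and $m = 10{,}000$; the shared $n$-dependent term $\tilde{O}(n^{-\frac{1}{2(d+2)}})$ is identical across regimes and is omitted.

# Illustrative comparison of the m-dependent convergence factors.
# The values d, d_star, and m are assumptions for illustration only;
# the abstract assumes d* << d.

m = 10_000
d, d_star = 100, 5

no_transfer = m ** (-1 / (2 * (d + 2)))            # m^{-1/(2(d+2))}
partial_transfer = m ** (-1 / (2 * (d_star + 3)))  # m^{-1/(2(d*+3))}
complete_transfer = m ** (-1 / 2)                  # m^{-1/2}

print(f"no transfer:       {no_transfer:.4f}")   # ~0.9559 (barely shrinks)
print(f"partial transfer:  {partial_transfer:.4f}")  # ~0.5624
print(f"complete transfer: {complete_transfer:.4f}")  # 0.0100

Even at this moderate sample size, the no-transfer factor is close to 1, reflecting the curse of dimensionality in $d$, while the partial- and complete-transfer factors shrink far faster because they depend only on the small intrinsic dimension $d^*$ or not on the dimension at all.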
