Transformation Function Based Methods for Model Shift

Transfer learning techniques are often used when one tries to adapt a model learned from a source domain with abundant labeled samples to a target domain with limited labeled samples. In this paper, we consider the regression problem under the model shift condition, i.e., the regression functions are different but related in the source and target domains. We approach this problem through the use of transformation functions, which characterize the relation between the source and the target domain. These transformation functions transform the original problem of learning the complicated regression function of the target domain into a problem of learning a simple auxiliary function. This transformation-function-based technique includes several previous works as special cases, but the class we propose is significantly more general. In this work we consider two widely used non-parametric estimators, Kernel Smoothing (KS) and Kernel Ridge Regression (KRR), for this setting and show that the excess risk converges at a faster statistical rate than in non-transfer learning. Through an ε-cover technique, we show that we can find the best transformation function within a function class. Lastly, experiments on synthetic, robotics, and neural imaging data demonstrate the effectiveness of our framework.
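To make the idea concrete, the following is a minimal sketch of one special case of the transformation-function approach: an offset-type transformation, where the target regression function is assumed to equal the source function plus a simple auxiliary function learned from the target residuals. The data, kernel bandwidths, and the specific offset form are illustrative assumptions, not the paper's exact construction; the KRR estimator is implemented directly with NumPy.

```python
import numpy as np

def krr_fit(X, y, gamma=1.0, lam=1e-3):
    # Gaussian-kernel ridge regression on 1-D inputs; returns a predictor.
    K = np.exp(-gamma * (X[:, None] - X[None, :]) ** 2)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    def predict(Xq):
        Kq = np.exp(-gamma * (Xq[:, None] - X[None, :]) ** 2)
        return Kq @ alpha
    return predict

rng = np.random.default_rng(0)
# Abundant source samples from a hypothetical f_src(x) = sin(x).
Xs = rng.uniform(0, 6, 200)
ys = np.sin(Xs) + 0.05 * rng.normal(size=200)
# Few target samples from f_tgt(x) = sin(x) + 0.5 x (model shift by an offset).
Xt = rng.uniform(0, 6, 15)
yt = np.sin(Xt) + 0.5 * Xt + 0.05 * rng.normal(size=15)

f_src = krr_fit(Xs, ys)
# Auxiliary function: fit the target residuals w.r.t. the source predictor.
delta = krr_fit(Xt, yt - f_src(Xt), gamma=0.3, lam=1e-2)
f_tgt = lambda x: f_src(x) + delta(x)

Xq = np.linspace(0.5, 5.5, 50)
truth = np.sin(Xq) + 0.5 * Xq
err_transfer = np.mean((f_tgt(Xq) - truth) ** 2)
# Baseline: non-transfer KRR fit on the 15 target samples alone.
err_target_only = np.mean((krr_fit(Xt, yt, gamma=0.3, lam=1e-2)(Xq) - truth) ** 2)
```

Because the auxiliary (offset) function is much simpler than the full target regression function, it can typically be estimated well from the few target samples, which is the source of the improved rates the abstract refers to.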