Lazy Transfer Learning
The concept of transfer learning frequently arises when one has already acquired a general-purpose classifier (e.g., a recognizer of human handwriting) and wants to adapt it to a similar but slightly different task (recognizing John's handwriting). In this paper, we focus on a "lazy" setting in which an accurate general-purpose probabilistic classifier is already given and the transfer algorithm is executed afterwards. We propose a novel methodology for estimating the class-posterior ratio using a sparse parametric model. The new classifier is then obtained by multiplying the existing classifier by the approximated posterior ratio. We show that the proposed method achieves promising results in numerical and real-world experiments on text-corpus and handwritten-digit classification.
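The multiplicative update described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's algorithm: it assumes the existing classifier outputs class posteriors and that a log-space posterior ratio has already been estimated somehow; the function name `adapt_posterior` is hypothetical.

```python
import math

def adapt_posterior(base_probs, log_ratio):
    """Multiply a fixed classifier's class posteriors by an (assumed
    already-estimated) class-posterior ratio, then renormalize.

    base_probs: list of per-sample class-probability lists.
    log_ratio:  matching list of log posterior-ratio values; a sparse
                model would leave most entries at 0 (ratio = 1).
    """
    adapted = []
    for probs, logs in zip(base_probs, log_ratio):
        # unnormalized new posterior: old posterior times the ratio
        unnorm = [p * math.exp(r) for p, r in zip(probs, logs)]
        z = sum(unnorm)
        adapted.append([u / z for u in unnorm])
    return adapted

# toy example: 2 samples, 3 classes; second sample's ratio is all ones,
# so its posterior is left unchanged
base = [[0.7, 0.2, 0.1],
        [0.3, 0.4, 0.3]]
ratio = [[math.log(0.5), math.log(2.0), 0.0],
         [0.0, 0.0, 0.0]]
adapted = adapt_posterior(base, ratio)
```

Because the ratio enters multiplicatively, a sparse ratio model only perturbs the base classifier where the target task actually differs, which is the intuition behind the "lazy" setting.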