
Efficient Training for Positive Unlabeled Learning

IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2016
Abstract

Positive unlabeled learning (PU learning) refers to the task of learning a binary classifier from only positive and unlabeled data [1]. This problem arises in various practical applications, such as multimedia/information retrieval [2], where the goal is to find samples in an unlabeled data set that are similar to the samples provided by a user, as well as outlier detection [3] and semi-supervised novelty detection [4]. The works in [5] and [6] have recently shown that PU learning can be formulated as a risk minimization problem. In particular, expressing the risk with a convex loss function, such as the double Hinge loss, yields better classification performance than other loss functions. Nevertheless, these works focus only on the generalization performance obtained with different loss functions, without considering the efficiency of training. To address this, we propose a novel algorithm that efficiently solves the risk minimization problem stated in [6]. In particular, we show that the storage complexity of our approach scales only linearly with the number of training samples. Concerning training time, we show experimentally on several benchmark data sets that our algorithm exhibits the same quadratic behaviour as existing optimization algorithms implemented in highly efficient libraries. The rest of the paper is organized as follows. In Section 2 we review the formulation of the PU learning problem and state, for the first time, a Representer theorem for it. In Section 3 we derive the convex formulation of the problem using the double Hinge loss function. In Section 4 we propose an algorithm to solve the optimization problem, and in the final section we describe the experimental evaluation and conclude.
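To make the risk minimization viewpoint concrete, the following is a minimal sketch of the empirical PU objective of [5], [6] with the double Hinge loss. It assumes the class prior pi is known and uses a plain linear scorer with an L2 regularizer purely for illustration; the function names (`double_hinge`, `pu_empirical_risk`), the linear model, and the regularization constant are illustrative assumptions, not the algorithm proposed in the paper (which relies on a kernel expansion via the Representer theorem).

```python
import numpy as np

def double_hinge(z):
    # Double Hinge loss l(z) = max(-z, max(0, (1 - z) / 2)),
    # a convex loss satisfying l(z) - l(-z) = -z.
    return np.maximum(-z, np.maximum(0.0, 0.5 * (1.0 - z)))

def pu_empirical_risk(w, b, X_pos, X_unl, prior, lam=1e-3):
    # Empirical PU risk of a linear scorer g(x) = <w, x> + b.
    # Because l(z) - l(-z) = -z, the PU risk
    #   pi * E_P[l(g)] - pi * E_P[l(-g)] + E_U[l(-g)]
    # reduces to -pi * E_P[g] + E_U[l(-g)]; an L2 penalty is added here.
    g_pos = X_pos @ w + b   # scores on positive samples
    g_unl = X_unl @ w + b   # scores on unlabeled samples
    return (-prior * g_pos.mean()
            + double_hinge(-g_unl).mean()
            + 0.5 * lam * np.dot(w, w))
```

Minimizing this objective over (w, b), e.g. with any off-the-shelf convex solver, gives one simple instance of the convex PU formulation that the paper trains more efficiently.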
