On Kernelization of Supervised Mahalanobis Distance Learners
This paper makes three contributions to the problem of learning a Mahalanobis distance. First, a general framework for kernelizing Mahalanobis distance learners is presented. The framework allows existing algorithms to learn a Mahalanobis distance in a feature space associated with a pre-specified kernel function. The framework is then used to kernelize three well-known learners, namely ``neighborhood component analysis'', ``large margin nearest neighbors'' and ``discriminant neighborhood embedding'', thereby solving open problems posed in recent works. Second, whereas previous related papers merely assume that representer theorems hold, here representer theorems for kernelized Mahalanobis distance learners are formally proven. Third, unlike previous works, which require cross-validation to select a kernel, an inductive kernel alignment method based on quadratic programming is derived in this paper and used to automatically select an effective kernel function. Numerical results on various real-world datasets are presented.
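To fix the basic objects discussed above, the following minimal sketch (not the paper's algorithm) illustrates a Mahalanobis distance parameterized by a positive semidefinite matrix M, together with kernel-target alignment in the sense of Cristianini et al., the kind of quantity that alignment-based kernel selection maximizes; the function names and the example data are hypothetical.

```python
import numpy as np

# A Mahalanobis distance is parameterized by a positive semidefinite matrix M:
#     d_M(x, y) = sqrt((x - y)^T M (x - y)).
def mahalanobis(x, y, M):
    diff = x - y
    return float(np.sqrt(diff @ M @ diff))

# Kernel-target alignment between a kernel matrix K and the ideal kernel
# y y^T built from labels y in {-1, +1}: the normalized Frobenius inner
# product <K, y y^T>_F / (||K||_F * ||y y^T||_F).
def alignment(K, y):
    Ky = np.outer(y, y)
    return float(np.sum(K * Ky) / (np.linalg.norm(K) * np.linalg.norm(Ky)))

x1 = np.array([1.0, 2.0])
x2 = np.array([3.0, 1.0])
# With M = I the Mahalanobis distance reduces to the Euclidean distance.
print(mahalanobis(x1, x2, np.eye(2)))
```

A kernel perfectly matched to the labels attains alignment 1; selecting among candidate kernels then amounts to maximizing this score, which the paper formulates inductively as a quadratic program.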