SpicyMKL

Abstract
We propose a new optimization algorithm for Multiple Kernel Learning (MKL) with general convex loss functions. The proposed algorithm is a proximal minimization method that utilizes the "smoothed" dual objective function and converges super-linearly. The sparsity of the intermediate solutions plays a crucial role in the efficiency of the proposed algorithm; consequently, it scales well as the number of kernels increases. Experimental results show that our algorithm compares favorably with existing methods, especially when the number of kernels is large (> 1000).
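The kernel-level sparsity mentioned above is the effect of a group-lasso-type penalty, whose proximal operator is block soft-thresholding: entire blocks (kernels) are set exactly to zero. The snippet below is a minimal illustrative sketch of that mechanism using a plain proximal-gradient loop on a least-squares toy problem; it is not the paper's SpicyMKL algorithm (which operates on a smoothed dual objective), and all variable names and the problem setup are invented for illustration.

```python
import numpy as np

def prox_group_lasso(v, groups, lam):
    """Block soft-thresholding: the proximal operator of the penalty
    lam * sum_m ||v_m||_2. Blocks whose norm is below lam are zeroed,
    which is what produces kernel-level sparsity in sparse MKL."""
    out = np.zeros_like(v)
    for idx in groups:
        norm = np.linalg.norm(v[idx])
        if norm > lam:
            out[idx] = (1.0 - lam / norm) * v[idx]
    return out

# Toy proximal-gradient loop (illustration only, not SpicyMKL itself).
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 12))
x_true = np.zeros(12)
x_true[:3] = 1.0                      # only the first block carries signal
b = A @ x_true
groups = [np.arange(0, 3), np.arange(3, 6), np.arange(6, 9), np.arange(9, 12)]

x = np.zeros(12)
step = 1.0 / np.linalg.norm(A.T @ A, 2)   # 1/L for the smooth quadratic part
lam = 0.5
for _ in range(300):
    grad = A.T @ (A @ x - b)              # gradient of 0.5 * ||A x - b||^2
    x = prox_group_lasso(x - step * grad, groups, step * lam)

active = [i for i, idx in enumerate(groups)
          if np.linalg.norm(x[idx]) > 1e-8]
```

Because whole blocks become exactly zero along the way, an optimizer can skip the corresponding kernels at intermediate iterations; this is the sparsity the abstract credits for the algorithm's scalability in the number of kernels.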