Scaling Up Sparse Support Vector Machine by Simultaneous Feature and Sample Reduction

Sparse support vector machine (SVM) is a popular classification technique that can simultaneously learn a small set of the most interpretable features and identify the support vectors. It has achieved great success in many real-world applications. However, for large-scale problems involving a huge number of samples and extremely high-dimensional features, solving sparse SVM remains challenging. By noting that sparse SVM induces sparsities in both the feature and sample spaces, we propose a novel approach, based on accurate estimations of the primal and dual optima of sparse SVM, to simultaneously identify the features and samples that are guaranteed to be irrelevant to the outputs. We can thus remove the identified features and samples from the training phase, leading to substantial savings in both memory usage and computational cost without sacrificing accuracy. To the best of our knowledge, the proposed method is the \emph{first static} feature and sample reduction method for sparse SVM. Experiments on both synthetic and real datasets (e.g., the kddb dataset with about 20 million samples and 30 million features) demonstrate that our approach significantly outperforms existing state-of-the-art methods, and the speedup it delivers can be orders of magnitude.
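To make the reduce-then-train workflow concrete, here is a minimal sketch of the general idea the abstract describes, not the paper's actual screening rules. It assumes boolean masks marking the features and samples that survived screening (in the paper these come from safe rules built on estimates of the primal and dual optima), solves the reduced problem, and scatters the solution back to the full feature dimension. The function name `train_on_reduced_problem` and the use of scikit-learn's `LinearSVC` with an l1 penalty as a stand-in sparse SVM solver are illustrative assumptions.

```python
# Sketch of the reduce-then-train workflow. The screening masks are
# placeholders: deriving them is exactly what the paper's method does,
# and is out of scope here.
import numpy as np
from sklearn.svm import LinearSVC

def train_on_reduced_problem(X, y, keep_features, keep_samples, C=1.0):
    """Train an l1-regularized linear SVM on the reduced problem.

    keep_features, keep_samples: boolean masks over columns/rows of X
    marking the features and samples NOT identified as irrelevant.
    """
    # Keep only the surviving rows (samples) and columns (features).
    X_red = X[np.ix_(keep_samples, keep_features)]
    y_red = y[keep_samples]

    # l1-penalized squared-hinge SVM as a stand-in sparse SVM solver.
    clf = LinearSVC(penalty="l1", loss="squared_hinge", dual=False, C=C)
    clf.fit(X_red, y_red)

    # Scatter the learned weights back to the full feature dimension;
    # screened-out features are guaranteed to carry zero weight.
    w = np.zeros(X.shape[1])
    w[keep_features] = clf.coef_.ravel()
    return w

# Toy usage with random masks standing in for real screening output.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
y = np.sign(X[:, 0] + 0.1 * rng.standard_normal(200))
keep_features = rng.random(50) > 0.3   # placeholder screening result
keep_samples = rng.random(200) > 0.2   # placeholder screening result
w = train_on_reduced_problem(X, y, keep_features, keep_samples)
```

The key point the sketch illustrates is that the solver only ever touches the reduced matrix `X_red`, which is where the memory and runtime savings come from; the quality of the result hinges entirely on the screening step guaranteeing that the discarded features and samples cannot affect the optimum.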