Scaling Up Sparse Support Vector Machine by Simultaneous Feature and Sample Reduction

24 July 2016 (arXiv:1607.06996)
Weizhong Zhang, Bin Hong, Wei Liu, Jieping Ye, Deng Cai, Xiaofei He
Abstract

Sparse support vector machine (SVM) is a popular classification technique that simultaneously learns a classifier and a small set of the most interpretable features, and it has achieved great success in many real-world applications. However, for large-scale problems involving a huge number of samples and extremely high-dimensional features, solving sparse SVM remains challenging. Noting that sparse SVM induces sparsity in both the feature and sample spaces, we propose a novel approach, based on accurate estimations of the primal and dual optima of sparse SVM, that simultaneously identifies the features and samples guaranteed to be irrelevant to the output. The identified samples and features can thus be removed from the training phase, which may lead to substantial savings in both memory usage and computational cost without sacrificing accuracy. To the best of our knowledge, the proposed method is the first static feature and sample reduction method for sparse SVM. Experiments on both synthetic and real data sets (e.g., the kddb data set with about 20 million samples and 30 million features) demonstrate that our approach significantly outperforms existing state-of-the-art methods, and the speedups it yields can be orders of magnitude.
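The abstract describes a "screen then train" strategy: features and samples that a safe rule certifies as irrelevant are discarded before the reduced problem is solved. The sketch below is a rough illustration only, not the paper's method: it trains an L1-regularized linear SVM (scikit-learn's LinearSVC, a stand-in for the paper's solver) on data reduced by hypothetical screening masks, then maps the weights back to the full feature space. The paper's actual rules, derived from estimates of the primal and dual optima, are not reproduced here.

```python
# Illustrative sketch of the "screen then train" idea from the abstract.
# The screening masks are hypothetical placeholders; the paper's safe rules
# (built from primal/dual optimum estimates) are not implemented here.
import numpy as np
from sklearn.svm import LinearSVC

def train_with_reduction(X, y, keep_features, keep_samples, C=1.0):
    """Train a sparse (L1-regularized) linear SVM on the reduced problem.

    keep_features, keep_samples: boolean masks marking the features/samples
    that screening could NOT certify as irrelevant (assumed given).
    """
    X_red = X[np.ix_(keep_samples, keep_features)]
    y_red = y[keep_samples]

    clf = LinearSVC(penalty="l1", dual=False, C=C)  # sparse linear SVM
    clf.fit(X_red, y_red)

    # Map the reduced solution back to the full feature space; screened
    # features would have zero weight under a valid safe rule.
    w = np.zeros(X.shape[1])
    w[keep_features] = clf.coef_.ravel()
    return w, float(clf.intercept_[0])

# Toy usage with random masks standing in for real screening rules.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
y = np.sign(X[:, 0] + 0.1 * rng.standard_normal(200))
keep_features = rng.random(50) > 0.3    # placeholder feature screening
keep_samples = rng.random(200) > 0.2    # placeholder sample screening
w, b = train_with_reduction(X, y, keep_features, keep_samples)
```

With real safe screening rules, the reduced problem is provably equivalent to the original one on the kept features and samples, which is where the memory and runtime savings mentioned in the abstract come from.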
