Naive Feature Selection: a Nearly Tight Convex Relaxation for Sparse Naive Bayes

13 March 2025
Armin Askari
Alexandre d’Aspremont
Laurent El Ghaoui
Abstract

Due to its linear complexity, naive Bayes classification remains an attractive supervised learning method, especially in very large-scale settings. We propose a sparse version of naive Bayes, which can be used for feature selection. This leads to a combinatorial maximum-likelihood problem, for which we provide an exact solution in the case of binary data, or a bound in the multinomial case. We prove that our convex relaxation bound becomes tight as the marginal contribution of additional features decreases, using a priori duality gap bounds derived from the Shapley-Folkman theorem. We show how to produce primal solutions satisfying these bounds. Both binary and multinomial sparse models are solvable in time almost linear in problem size, representing a very small extra relative cost compared to classical naive Bayes. Numerical experiments on text data show that naive Bayes feature selection is statistically as effective as state-of-the-art feature selection methods such as recursive feature elimination, ℓ1-penalized logistic regression and LASSO, while being orders of magnitude faster.
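
To make the binary case concrete, the sketch below scores each feature by how much the joint log-likelihood improves when that feature is given class-specific Bernoulli parameters instead of a single parameter tied across both classes. Because the Bernoulli naive Bayes likelihood separates across features, keeping the k largest scores solves the cardinality-constrained problem exactly, which is the structure the abstract exploits for binary data. This is an illustrative reconstruction in the spirit of the paper, not the authors' code; the function name naive_feature_selection and its counts-based interface are assumptions.

import numpy as np

def _ll(c, n):
    # Maximized Bernoulli log-likelihood for c successes out of n trials:
    # max_p [c log p + (n - c) log(1 - p)], with the convention 0 * log 0 = 0.
    with np.errstate(divide="ignore", invalid="ignore"):
        t1 = np.where(c > 0, c * np.log(c / n), 0.0)
        t2 = np.where(n - c > 0, (n - c) * np.log((n - c) / n), 0.0)
    return t1 + t2

def naive_feature_selection(Xpos, Xneg, k):
    """Select k features by per-feature likelihood gain (binary data).

    Xpos, Xneg: 0/1 arrays of shape (n_pos, d) and (n_neg, d).
    Since the objective is separable across features, ranking the
    per-feature gains solves the size-k selection problem exactly.
    """
    npos, nneg = Xpos.shape[0], Xneg.shape[0]
    cpos = Xpos.sum(axis=0).astype(float)  # per-feature counts in class +
    cneg = Xneg.sum(axis=0).astype(float)  # per-feature counts in class -
    # Gain = (class-specific fit) - (tied-parameter fit); always >= 0.
    gain = _ll(cpos, npos) + _ll(cneg, nneg) - _ll(cpos + cneg, npos + nneg)
    return np.argsort(gain)[::-1][:k]  # indices of the k highest gains

Scoring takes one pass over the data plus an O(d log d) sort, so the overhead relative to fitting plain naive Bayes is negligible, consistent with the abstract's almost-linear running time.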

@article{askari2025_1905.09884,
  title={Naive Feature Selection: a Nearly Tight Convex Relaxation for Sparse Naive Bayes},
  author={Armin Askari and Alexandre d'Aspremont and Laurent El Ghaoui},
  journal={arXiv preprint arXiv:1905.09884},
  year={2025}
}