

Naive Feature Selection: a Nearly Tight Convex Relaxation for Sparse Naive Bayes

Mathematics of Operations Research (MOR), 2019
13 March 2025
Armin Askari
Alexandre d’Aspremont
Laurent El Ghaoui
arXiv: 1905.09884
Main: 1 page · 4 figures · 4 tables · Appendix: 22 pages
Abstract

Due to its linear complexity, naive Bayes classification remains an attractive supervised learning method, especially in very large-scale settings. We propose a sparse version of naive Bayes, which can be used for feature selection. This leads to a combinatorial maximum-likelihood problem, for which we provide an exact solution in the case of binary data, or a bound in the multinomial case. We prove that our convex relaxation bound becomes tight as the marginal contribution of additional features decreases, using a priori duality gap bounds derived from the Shapley-Folkman theorem. We show how to produce primal solutions satisfying these bounds. Both the binary and multinomial sparse models are solvable in time almost linear in problem size, representing a very small extra cost relative to classical naive Bayes. Numerical experiments on text data show that the naive Bayes feature selection method is as statistically effective as state-of-the-art feature selection methods such as recursive feature elimination, ℓ1-penalized logistic regression and LASSO, while being orders of magnitude faster.
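To make the idea concrete, here is a minimal Python sketch of this style of sparse naive Bayes feature selection on count data. It scores each feature by the gain in naive Bayes log-likelihood from giving it class-specific rather than shared multinomial parameters, then keeps the top k features. The function name, the Laplace smoothing, and this particular score are illustrative assumptions for a two-class setting; the sketch does not implement the paper's convex relaxation, its Shapley-Folkman duality gap bounds, or its exact binary-case solution.

import numpy as np

def naive_feature_selection(X, y, k):
    """Illustrative top-k feature selection in the spirit of sparse naive Bayes.

    X : (n_samples, n_features) nonnegative count matrix (e.g. bag of words)
    y : (n_samples,) binary labels in {0, 1}
    k : number of features to keep
    """
    # Class-conditional counts with Laplace smoothing.
    f_pos = X[y == 1].sum(axis=0) + 1.0
    f_neg = X[y == 0].sum(axis=0) + 1.0
    f_all = f_pos + f_neg

    # Multinomial parameters: per class and pooled across classes.
    p = f_pos / f_pos.sum()
    q = f_neg / f_neg.sum()
    r = f_all / f_all.sum()

    # Per-feature log-likelihood gain of class-specific vs. shared parameters
    # (an illustrative surrogate, not the paper's exact criterion).
    gain = f_pos * np.log(p) + f_neg * np.log(q) - f_all * np.log(r)

    # Indices of the k highest-scoring features.
    return np.argsort(gain)[-k:]

# Example usage on synthetic count data:
rng = np.random.default_rng(0)
X = rng.poisson(1.0, size=(200, 50))
y = rng.integers(0, 2, size=200)
selected = naive_feature_selection(X, y, k=10)

The ranking step costs roughly one pass over the data plus a sort over features, which mirrors the abstract's point that the selection adds only a small extra cost on top of fitting a classical naive Bayes model.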
