
Naive Feature Selection: a Nearly Tight Convex Relaxation for Sparse Naive Bayes

Abstract

Due to its linear complexity, naive Bayes classification remains an attractive supervised learning method, especially in very large-scale settings. We propose a sparse version of naive Bayes, which can be used for feature selection. This leads to a combinatorial maximum-likelihood problem, for which we provide an exact solution in the case of binary data, and a bound in the multinomial case. We prove that our convex relaxation bounds become tight as the marginal contribution of additional features decreases, using a priori duality gap bounds derived from the Shapley-Folkman theorem. We show how to produce primal solutions satisfying these bounds. Both binary and multinomial sparse models are solvable in time almost linear in problem size, representing only a small additional cost relative to classical naive Bayes. Numerical experiments on text data show that the naive Bayes feature selection method is as statistically effective as state-of-the-art feature selection methods such as recursive feature elimination, $\ell_1$-penalized logistic regression and LASSO, while being orders of magnitude faster.
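As a concrete illustration of the general idea, the Python sketch below ranks binary features by a simple Bernoulli naive Bayes likelihood gain and keeps the top k. The function names (naive_bayes_feature_scores, select_k_features) and the scoring rule are hypothetical simplifications for intuition only; they are not the paper's exact relaxation or its multinomial variant.

import numpy as np

def naive_bayes_feature_scores(X, y, alpha=1.0):
    """Score each binary feature by its class-conditional Bernoulli log-likelihood
    minus the pooled (class-independent) log-likelihood, with Laplace smoothing alpha.
    Higher scores indicate features whose distribution differs more across classes."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    n, d = X.shape
    ones = X.sum(axis=0)
    # Pooled model: one Bernoulli parameter per feature, ignoring the class label
    p_all = (ones + alpha) / (n + 2 * alpha)
    pooled_ll = ones * np.log(p_all) + (n - ones) * np.log(1 - p_all)
    # Class-conditional model: one Bernoulli parameter per feature and per class
    cond_ll = np.zeros(d)
    for c in np.unique(y):
        Xc = X[y == c]
        nc = Xc.shape[0]
        ones_c = Xc.sum(axis=0)
        p_c = (ones_c + alpha) / (nc + 2 * alpha)
        cond_ll += ones_c * np.log(p_c) + (nc - ones_c) * np.log(1 - p_c)
    return cond_ll - pooled_ll

def select_k_features(X, y, k):
    """Keep the indices of the k features with the largest likelihood gain."""
    scores = naive_bayes_feature_scores(X, y)
    return np.argsort(scores)[::-1][:k]

if __name__ == "__main__":
    # Toy example: 20 random binary features, with feature 0 made informative
    rng = np.random.default_rng(0)
    y = rng.integers(0, 2, size=500)
    X = rng.integers(0, 2, size=(500, 20))
    X[:, 0] = y
    print(select_k_features(X, y, k=5))

In this simplified setting the selection decouples across features, so ranking per-feature scores and truncating at k already gives the sparse solution; the paper's contribution is to make an analogous selection principled and nearly tight for the full (binary and multinomial) maximum-likelihood problem.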

@article{askari2019_1905.09884,
  title={Naive Feature Selection: a Nearly Tight Convex Relaxation for Sparse Naive Bayes},
  author={Armin Askari and Alexandre d'Aspremont and Laurent El Ghaoui},
  journal={arXiv preprint arXiv:1905.09884},
  year={2019}
}