High-dimensional quadratic classifiers in non-sparse settings

We consider high-dimensional quadratic classifiers in non-sparse settings. In this context, the target of classification rules is not the Bayes error rate. The classifier based on the Mahalanobis distance does not always perform well, even when the sample sizes grow to infinity and the populations are assumed Gaussian with known covariance matrices. The quadratic classifiers proposed in this paper effectively draw information about heteroscedasticity from the difference of parameters related to the expanding covariance matrices. We show that the quadratic classifiers enjoy consistency properties: their misclassification rates tend to zero as the dimension goes to infinity under non-sparse conditions. We also verify that, under certain conditions, the quadratic classifiers are asymptotically normally distributed as the dimension goes to infinity. We discuss feature selection and sparse inverse covariance matrix estimation for further evaluation of misclassification rates, giving guidelines for the choice of classifier.
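As a minimal illustration of the kind of rule discussed above (a sketch of the classical Gaussian quadratic discriminant with known, possibly unequal covariances, not the paper's proposed estimator), the Mahalanobis-based quadratic classifier can be written as follows; all means and covariances here are hypothetical:

```python
import numpy as np

def quadratic_discriminant(x, mus, sigmas):
    """Assign x to the class k minimizing
    (x - mu_k)^T Sigma_k^{-1} (x - mu_k) + log det Sigma_k,
    i.e. the Gaussian quadratic rule with known, possibly
    unequal covariance matrices (heteroscedastic case)."""
    scores = []
    for mu, sigma in zip(mus, sigmas):
        d = x - mu
        # Mahalanobis distance term plus the log-determinant term,
        # which carries the information about heteroscedasticity
        score = d @ np.linalg.solve(sigma, d) + np.linalg.slogdet(sigma)[1]
        scores.append(score)
    return int(np.argmin(scores))

# Hypothetical two-class example with unequal covariances
p = 10
mu0, mu1 = np.zeros(p), 2.0 * np.ones(p)
sigma0, sigma1 = np.eye(p), 2.0 * np.eye(p)
label0 = quadratic_discriminant(mu0, [mu0, mu1], [sigma0, sigma1])  # class 0
label1 = quadratic_discriminant(mu1, [mu0, mu1], [sigma0, sigma1])  # class 1
```

When the covariances differ, the log-determinant term distinguishes the classes even if the means coincide; this is the heteroscedasticity information the abstract refers to, here in its simplest known-parameter form.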