Deep Copula Classifier: Theory, Consistency, and Empirical Evaluation
We present the Deep Copula Classifier (DCC), a class-conditional generative model that separates marginal estimation from dependence modeling using neural copula densities. DCC is interpretable, Bayes-consistent, and attains an excess-risk rate for smooth copula densities. In a controlled two-class study with strong feature dependence, DCC learns Bayes-aligned decision regions. With oracle or pooled marginals, it nearly reaches the best possible accuracy and ROC-AUC; as expected, per-class KDE marginals perform less well on accuracy, ROC-AUC, and PR-AUC. On the Pima Indians Diabetes dataset, calibrated DCC outperforms Logistic Regression, SVM (RBF), and Naive Bayes on accuracy, ROC-AUC, and PR-AUC, and matches Logistic Regression on the lowest Expected Calibration Error (ECE); Random Forest is also competitive. Directly modeling feature dependence yields strong, well-calibrated performance with a clear probabilistic interpretation, making DCC a practical, theoretically grounded alternative to independence-based classifiers.
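The core idea, separating per-class marginal estimation from dependence modeling via a copula density and classifying by Bayes' rule, can be sketched as follows. This is a minimal illustration, not the paper's method: it substitutes a Gaussian copula for the neural copula density, uses KDE marginal pdfs with a smoothed empirical CDF for the probability-integral transform, and all class/function names are hypothetical.

```python
import numpy as np
from scipy import stats


class GaussianCopulaClassifier:
    """Class-conditional generative classifier: per-class KDE marginals
    plus a Gaussian copula for the dependence structure (a simple
    stand-in for a learned neural copula density)."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.models_ = {}
        for c in self.classes_:
            Xc = X[y == c]
            n, d = Xc.shape
            # Marginal density estimates, one KDE per feature.
            kdes = [stats.gaussian_kde(Xc[:, j]) for j in range(d)]
            # Probability-integral transform via scaled ranks in (0, 1),
            # then map to normal scores to fit the copula correlation.
            U = stats.rankdata(Xc, axis=0) / (n + 1)
            Z = stats.norm.ppf(U)
            R = np.corrcoef(Z, rowvar=False)
            self.models_[c] = (np.mean(y == c), kdes, Xc, R)
        return self

    def _log_density(self, x, kdes, Xc, R):
        d = len(kdes)
        # log of the product of marginal pdfs.
        log_f = sum(np.log(kdes[j](x[j])[0] + 1e-300) for j in range(d))
        # Smoothed empirical CDF values, mapped to normal scores.
        u = np.array([(np.sum(Xc[:, j] <= x[j]) + 0.5) / (len(Xc) + 1)
                      for j in range(d)])
        z = stats.norm.ppf(np.clip(u, 1e-6, 1 - 1e-6))
        # Log Gaussian-copula density c(u) with correlation matrix R.
        Rinv = np.linalg.inv(R)
        log_c = -0.5 * (np.log(np.linalg.det(R))
                        + z @ (Rinv - np.eye(d)) @ z)
        return log_f + log_c

    def predict(self, X):
        # Bayes rule: argmax over classes of log prior + log density.
        scores = []
        for c in self.classes_:
            prior, kdes, Xc, R = self.models_[c]
            scores.append([np.log(prior) + self._log_density(x, kdes, Xc, R)
                           for x in X])
        return self.classes_[np.argmax(np.array(scores), axis=0)]


# Synthetic two-class demo with opposite within-class correlations.
rng = np.random.default_rng(0)
n = 300
X0 = rng.multivariate_normal([0.0, 0.0], [[1, 0.8], [0.8, 1]], n)
X1 = rng.multivariate_normal([1.5, 1.5], [[1, -0.8], [-0.8, 1]], n)
X = np.vstack([X0, X1])
y = np.r_[np.zeros(n), np.ones(n)]
clf = GaussianCopulaClassifier().fit(X, y)
acc = float((clf.predict(X) == y).mean())
```

Because each class gets its own correlation matrix, the classifier can exploit dependence structure that an independence-based model such as Naive Bayes ignores; swapping the Gaussian copula for a learned copula density is what distinguishes DCC from this sketch.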