Testing Conditional Independence in Supervised Learning Algorithms

Abstract

We propose a general test of conditional independence. The conditional predictive impact (CPI) is a provably consistent and unbiased estimator of one or several features' association with a given outcome, conditional on a (potentially empty) reduced feature set. Building on the knockoff framework of Candès et al. (2018), we develop a novel testing procedure that works in conjunction with any valid knockoff sampler, supervised learning algorithm, and loss function. The CPI can be efficiently computed for low- or high-dimensional data without any sparsity constraints. We demonstrate convergence criteria for the CPI and develop statistical inference procedures for evaluating its magnitude, significance, and precision. These tests aid in feature and model selection, extending traditional frequentist and Bayesian techniques to general supervised learning tasks. The CPI may also be applied in causal discovery to identify underlying graph structures for multivariate systems. We test our method using various algorithms, including linear regression, neural networks, random forests, and support vector machines. Empirical results show that the CPI compares favorably to alternative variable importance measures and other nonparametric tests of conditional independence on a diverse array of real and simulated datasets. Simulations confirm that our inference procedures successfully control Type I error and achieve nominal coverage probability with greater power and speed than the original knockoff filter. Our method has been implemented in an R package, cpi, which can be downloaded from https://github.com/dswatson/cpi.
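To illustrate the core idea, the following is a minimal Python sketch of a CPI-style test, not the authors' cpi package. It assumes independent standard Gaussian features, so an independent standard normal draw is a valid model-X knockoff for any single feature; the model, loss, and paired t-test are stand-ins for the general sampler/learner/loss combinations described above.

```python
# Minimal CPI-style sketch: compare test-set loss when a feature is
# replaced by its knockoff copy versus the original feature.
# (Illustrative only; variable names and the setup are assumptions.)
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, p = 500, 3
X = rng.normal(size=(n, p))                # independent Gaussian features
beta = np.array([2.0, 0.0, 1.0])           # feature 1 is truly null
y = X @ beta + rng.normal(size=n)

# Fit a linear model on a training split; evaluate losses on a test split.
train, test = np.arange(250), np.arange(250, 500)
coef, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)

def sq_loss(Xmat):
    """Per-sample squared-error loss on the test split."""
    return (y[test] - Xmat[test] @ coef) ** 2

j = 0                                      # feature under test (truly relevant)
X_knock = X.copy()
# Because features are independent N(0, 1), an independent normal draw
# is a valid knockoff for feature j in this simulated setting.
X_knock[:, j] = rng.normal(size=n)

delta = sq_loss(X_knock) - sq_loss(X)      # per-sample loss increase
cpi = delta.mean()                         # CPI point estimate
t_stat, p_val = stats.ttest_1samp(delta, 0.0, alternative='greater')
print(f"CPI = {cpi:.4f}, p = {p_val:.4g}")
```

For the relevant feature (j = 0), swapping in the knockoff destroys the feature's predictive signal, so the loss difference is positive and the one-sided t-test rejects conditional independence; repeating with the null feature (j = 1) should yield a CPI near zero and a large p-value.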
