NBDT: Neural-Backed Decision Trees
Machine learning applications in fields such as finance and medicine demand accurate and justifiable predictions, barring most deep learning methods from use. In response, previous work combines decision trees with deep learning, yielding models that either (1) sacrifice interpretability to maintain accuracy or (2) underperform modern neural networks to maintain interpretability. We forgo this dilemma by proposing Neural-Backed Decision Trees (NBDTs), modified hierarchical classifiers that use trees constructed in weight-space. Our NBDTs achieve (1) interpretability and (2) neural network accuracy: We preserve interpretable properties -- e.g., leaf purity and a non-ensembled model -- and demonstrate interpretability of model predictions both qualitatively and quantitatively. Furthermore, NBDTs match state-of-the-art neural networks on CIFAR10, CIFAR100, TinyImageNet, and ImageNet to within 1-2%. This yields state-of-the-art interpretable models on ImageNet, with NBDTs besting all decision-tree-based methods by ~14% to attain 75.30% top-1 accuracy. Code and pretrained NBDTs are at https://github.com/alvinwan/neural-backed-decision-trees.
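To make the phrase "trees constructed in weight-space" concrete, here is a minimal, hypothetical sketch of NBDT-style hard inference. All names, the toy hierarchy, and the random features are ours, not from the released codebase; the sketch only assumes (per the abstract) that the final linear layer of a trained network has one weight row per class, that inner tree nodes are represented by the mean of their leaves' weight rows, and that prediction greedily descends the tree by inner product with the sample's features.

```python
import numpy as np

# Hypothetical sketch (identifiers are ours, not the paper's API).
rng = np.random.default_rng(0)
num_classes, feat_dim = 4, 8
W = rng.normal(size=(num_classes, feat_dim))  # rows = per-class weight vectors

# A fixed toy hierarchy over the 4 classes: root -> (left: {0,1}, right: {2,3}).
# In the paper, the hierarchy is induced from W itself; here it is hand-picked.
children = {"root": ["left", "right"], "left": [0, 1], "right": [2, 3]}
leaves_under = {"root": [0, 1, 2, 3], "left": [0, 1], "right": [2, 3]}

def node_vector(node):
    """Representative vector: a leaf (int) uses its own weight row; an
    inner node (str) uses the mean of its leaves' weight rows."""
    if isinstance(node, str):
        return W[leaves_under[node]].mean(axis=0)
    return W[node]

def hard_inference(x):
    """Greedily descend the tree, at each node picking the child whose
    representative vector has the largest inner product with features x."""
    node = "root"
    while isinstance(node, str):          # inner nodes are strings
        kids = children[node]
        scores = [node_vector(c) @ x for c in kids]
        node = kids[int(np.argmax(scores))]
    return node                           # predicted class index

x = rng.normal(size=feat_dim)  # stand-in for the network's penultimate features
pred = hard_inference(x)
print(pred)
```

The descent makes each prediction a sequence of intermediate decisions, which is the source of the interpretability the abstract claims; the real method also supports a "soft" variant that multiplies child probabilities along every root-to-leaf path.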