A Gap Between Decision Trees and Neural Networks

Akash Kumar
Main: 19 pages
10 figures
Bibliography: 3 pages
Appendix: 23 pages
Abstract

We study when geometric simplicity of decision boundaries, used here as a notion of interpretability, can conflict with accurate approximation of axis-aligned decision trees by shallow neural networks. Decision trees induce rule-based, axis-aligned decision regions (finite unions of boxes), whereas shallow ReLU networks are typically trained as score models whose predictions are obtained by thresholding. We analyze the infinite-width, bounded-norm, single-hidden-layer ReLU class through the Radon total variation ($\mathrm{RTV}$) seminorm, which controls the geometric complexity of level sets.
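The contrast between the two model classes can be made concrete with a minimal sketch (illustrative only, not taken from the paper): a decision-tree rule labels a point by axis-aligned threshold tests, yielding a union-of-boxes region, while a single-hidden-layer ReLU network produces a real-valued score whose sign gives the label, so its decision region is a level set of the score. The weights below are random placeholders, not a trained model.

```python
import numpy as np

def tree_predict(x):
    # Axis-aligned rule (depth-2 tree): label 1 iff x lies in the box
    # [0, 1] x [0, 1]; the decision region is a single axis-aligned box.
    return int(0.0 <= x[0] <= 1.0 and 0.0 <= x[1] <= 1.0)

# Single-hidden-layer ReLU score model: f(x) = v . relu(W x + b) + c.
# Random weights stand in for a trained network in this sketch.
rng = np.random.default_rng(0)
W, b = rng.normal(size=(8, 2)), rng.normal(size=8)
v, c = rng.normal(size=8), rng.normal()

def relu_net_predict(x):
    score = v @ np.maximum(W @ x + b, 0.0) + c
    # Prediction = thresholded score; the boundary is a level set of f.
    return int(score > 0.0)

x = np.array([0.5, 0.5])
print(tree_predict(x))      # 1: the point is inside the box
print(relu_net_predict(x))  # 0 or 1, depending on the random weights
```

The tree's region is exactly a box regardless of parameters, while the network's region is a polyhedral level set whose geometric complexity is what the $\mathrm{RTV}$ seminorm controls.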
