We propose -net, an intrinsically interpretable architecture that combines the compositional multilinear structure of tensor networks with the expressivity and efficiency of deep neural networks. -nets match the accuracy of their baseline counterparts. Our novel, efficient diagonalisation algorithm, ODT, reveals linear low-rank structure in a multilayer SVHN model, which we leverage for formal weight-based interpretability and model compression.
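To make the low-rank claim concrete, the following is a minimal illustrative sketch, not the paper's ODT algorithm: it assumes a layer's weights are well approximated by a low-rank linear map and shows how a truncated SVD exposes that structure and yields a simple compression scheme. The function name low_rank_compress, the synthetic weight matrix, and the chosen rank are all hypothetical.

import numpy as np

def low_rank_compress(W: np.ndarray, rank: int):
    """Return a rank-`rank` factorisation (A, B) with W ~= A @ B."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * S[:rank]   # (out_dim, rank), columns scaled by singular values
    B = Vt[:rank, :]             # (rank, in_dim)
    return A, B

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic weight matrix with hidden rank-8 structure plus noise.
    W = rng.standard_normal((256, 8)) @ rng.standard_normal((8, 512))
    W += 0.01 * rng.standard_normal(W.shape)

    A, B = low_rank_compress(W, rank=8)
    rel_err = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
    print(f"relative reconstruction error: {rel_err:.4f}")
    print(f"parameter reduction: {W.size / (A.size + B.size):.1f}x")

If the weights really are near low rank, the relative error stays small while the factorised form stores far fewer parameters, which is the basic mechanism behind weight-based compression of linear structure.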
@article{dooms2025_2504.02667,
  title   = {Compositionality Unlocks Deep Interpretable Models},
  author  = {Thomas Dooms and Ward Gauderis and Geraint A. Wiggins and Jose Oramas},
  journal = {arXiv preprint arXiv:2504.02667},
  year    = {2025}
}