Compositionality Unlocks Deep Interpretable Models

3 April 2025
Thomas Dooms
Ward Gauderis
Geraint A. Wiggins
José Oramas
Abstract

We propose χ-net, an intrinsically interpretable architecture that combines the compositional multilinear structure of tensor networks with the expressivity and efficiency of deep neural networks. χ-nets match the accuracy of their baseline counterparts. Our novel, efficient diagonalisation algorithm, ODT, reveals linear low-rank structure in a multilayer SVHN model. We leverage this toward formal weight-based interpretability and model compression.
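As a rough illustration of what an activation-free, multilinear layer of this kind can look like, the sketch below stacks layers whose outputs are bilinear forms of their inputs, realised as an element-wise product of two linear projections, and then folds a layer's weights into a third-order tensor whose eigen-spectra can be inspected for low-rank structure. The class name BilinearLayer, the product construction, and the spectral inspection are illustrative assumptions for this page only; they are not the paper's χ-net definition or its ODT algorithm.

import torch
import torch.nn as nn

class BilinearLayer(nn.Module):
    # Hypothetical multilinear layer: y = (W x) * (V x), element-wise.
    # No pointwise nonlinearity, so stacked layers stay polynomial in the input.
    def __init__(self, d_in: int, d_out: int):
        super().__init__()
        self.w = nn.Linear(d_in, d_out, bias=False)
        self.v = nn.Linear(d_in, d_out, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.w(x) * self.v(x)

if __name__ == "__main__":
    layer = BilinearLayer(d_in=8, d_out=4)
    x = torch.randn(2, 8)
    y = layer(x)  # shape (2, 4)

    # Fold the layer into a third-order tensor B[k, i, j] = w[k, i] * v[k, j],
    # symmetrise each output slice, and look at its eigen-spectrum; a small
    # number of dominant eigenvalues per slice would indicate low-rank structure.
    B = torch.einsum("ki,kj->kij", layer.w.weight, layer.v.weight)  # (4, 8, 8)
    sym = 0.5 * (B + B.transpose(1, 2))
    eigvals = torch.linalg.eigvalsh(sym)  # (4, 8), one spectrum per output feature
    print(y.shape, eigvals.shape)

Because the layer is purely multilinear, this weight-level analysis needs no input data, which is the sense in which the abstract's interpretability claims are "weight-based".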

@article{dooms2025_2504.02667,
  title={Compositionality Unlocks Deep Interpretable Models},
  author={Thomas Dooms and Ward Gauderis and Geraint A. Wiggins and José Oramas},
  journal={arXiv preprint arXiv:2504.02667},
  year={2025}
}