Training Neural Networks for Modularity aids Interpretability
Main: 4 pages, 7 figures, 1 table; Bibliography: 2 pages; Appendix: 3 pages
Abstract
One approach to improving network interpretability is clusterability: splitting a model into disjoint clusters that can be studied independently. We find that pretrained models are highly unclusterable, and therefore train models to be more modular using an "enmeshment loss" function that encourages the formation of non-interacting clusters. Using automated interpretability measures, we show that our method finds clusters that learn distinct, disjoint, and smaller circuits for CIFAR-10 labels. Our approach provides a promising direction for making neural networks easier to interpret.
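The abstract does not specify the form of the enmeshment loss. As a minimal sketch of the general idea, assuming units are pre-assigned to clusters and the goal is to penalize cross-cluster connections, one could sum the squared weights that link units in different clusters (the function name, signature, and cluster-assignment scheme here are illustrative, not the paper's):

```python
import numpy as np

def enmeshment_loss(weight, row_clusters, col_clusters):
    """Hypothetical cross-cluster penalty for one linear layer.

    weight: (out_features, in_features) weight matrix.
    row_clusters: (out_features,) integer cluster id per output unit.
    col_clusters: (in_features,) integer cluster id per input unit.
    """
    # Mask is 1 exactly where a weight connects two different clusters.
    cross_mask = (row_clusters[:, None] != col_clusters[None, :]).astype(weight.dtype)
    # Penalizing these entries pushes cross-cluster weights toward zero,
    # encouraging non-interacting clusters.
    return float((weight ** 2 * cross_mask).sum())

# Toy usage: 4 output and 4 input units, two clusters of two units each.
w = np.ones((4, 4))
rows = np.array([0, 0, 1, 1])
cols = np.array([0, 0, 1, 1])
loss = enmeshment_loss(w, rows, cols)
# 8 of the 16 entries cross a cluster boundary, each contributing 1.0 -> 8.0
```

Added to a task loss with a small coefficient, such a term would trade off accuracy against modularity; the paper's actual formulation may differ.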
