
A Quotient Homology Theory of Representation in Neural Networks

Main: 13 pages, 9 figures, 3 tables; Bibliography: 6 pages; Appendix: 8 pages
Abstract

Previous research has proven that the set of maps implemented by neural networks with a ReLU activation function is identical to the set of piecewise linear continuous maps. Furthermore, such networks induce a hyperplane arrangement splitting the input domain of the network into convex polyhedra $G_J$ over which a network $\Phi$ operates in an affine manner. In this work, we leverage these properties to define an equivalence relation $\sim_\Phi$ on top of an input dataset, which can be split into two sets related to the local rank of $\Phi_J$ and the intersections $\cap_i \text{Im}\,\Phi_{J_i}$. We refer to the latter as the \textit{overlap decomposition} $\mathcal{O}_\Phi$ and prove that, if the intersections between each polyhedron and an input manifold are convex, the homology groups of neural representations are isomorphic to quotient homology groups $H_k(\Phi(\mathcal{M})) \simeq H_k(\mathcal{M}/\mathcal{O}_\Phi)$. This lets us compute the Betti numbers of neural representations intrinsically, without the choice of an external metric. We develop methods to numerically compute the overlap decomposition through linear programming and a union-find algorithm. Using this framework, we perform several experiments on toy datasets showing that, compared to standard persistent homology, our overlap homology-based computation of Betti numbers tracks purely topological rather than geometric features. Finally, we study the evolution of the overlap decomposition during training on several classification problems while varying network width and depth, and discuss some shortcomings of our method.
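The union-find step mentioned in the abstract can be sketched as follows: given the images of the polyhedral pieces, pairwise-merge the indices of pieces whose images intersect, yielding the classes of the overlap decomposition. This is a minimal illustration, not the paper's implementation; the `images_intersect` predicate stands in for the linear-programming feasibility check described in the abstract, and the interval representation of images is an assumption for demonstration.

```python
class UnionFind:
    """Disjoint-set structure with path halving."""
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra


def overlap_classes(images, images_intersect):
    """Group indices of polyhedral pieces whose images under the network
    intersect; each group is one class of the overlap decomposition.
    `images_intersect` is a hypothetical predicate standing in for the
    paper's linear-programming check."""
    n = len(images)
    uf = UnionFind(n)
    for i in range(n):
        for j in range(i + 1, n):
            if images_intersect(images[i], images[j]):
                uf.union(i, j)
    groups = {}
    for i in range(n):
        groups.setdefault(uf.find(i), []).append(i)
    return list(groups.values())


# Toy usage: images modeled as 1-D intervals (lo, hi).
intervals = [(0.0, 2.0), (1.0, 3.0), (5.0, 6.0)]
overlaps = lambda a, b: max(a[0], b[0]) <= min(a[1], b[1])
print(overlap_classes(intervals, overlaps))  # → [[0, 1], [2]]
```

In the actual method, deciding whether two images intersect would be an emptiness check on the intersection of two affine images of polyhedra, which is exactly the kind of feasibility problem a linear program can answer.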
