
A Relative Homology Theory of Representation in Neural Networks

Abstract

Previous research has proven that the set of maps implemented by neural networks with a ReLU activation function is identical to the set of continuous piecewise linear maps. Furthermore, such networks induce a hyperplane arrangement splitting the input domain into convex polyhedra $G_J$ over which the network $\Phi$ operates in an affine manner. In this work, we leverage these properties to define an equivalence relation on inputs, $\sim_\Phi$, which can be split into two parts: one related to the local rank of $\Phi_J$ and one to the intersections $\cap\, \mathrm{Im}\,\Phi_{J_i}$. We refer to the latter as the overlap decomposition $O_\Phi$ and prove that, if the intersections between each polyhedron and the input manifold are convex, the homology groups of neural representations are isomorphic to relative homology groups, $H_k(\Phi(M)) \simeq H_k(M, O_\Phi)$. This lets us compute Betti numbers without choosing an external metric. We develop methods to numerically compute the overlap decomposition through linear programming and a union-find algorithm. Using this framework, we perform several experiments on toy datasets, showing that, compared to standard persistent homology, our relative homology-based computation of Betti numbers tracks purely topological rather than geometric features. Finally, we study the evolution of the overlap decomposition during training on various classification problems while varying network width and depth, and discuss some shortcomings of our method.
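
The numerical procedure mentioned in the abstract (linear programming plus a union-find algorithm) can be sketched concretely. The snippet below is a minimal illustration under assumptions of our own, not the paper's implementation: it assumes each linear region's image is given as an H-representation polytope $\{x : Ax \le b\}$, tests pairwise overlap by solving a zero-objective feasibility LP with scipy.optimize.linprog, and merges overlapping regions with a hypothetical UnionFind helper.

import numpy as np
from scipy.optimize import linprog

def polytopes_intersect(A1, b1, A2, b2):
    """Check whether {x : A1 x <= b1} and {x : A2 x <= b2} intersect
    by solving a zero-objective LP (a pure feasibility problem)."""
    d = A1.shape[1]
    res = linprog(
        c=np.zeros(d),                      # objective is irrelevant; we only need feasibility
        A_ub=np.vstack([A1, A2]),           # stack both constraint systems
        b_ub=np.concatenate([b1, b2]),
        bounds=[(None, None)] * d,          # variables are unbounded by default here
        method="highs",
    )
    return res.status == 0                  # status 0 = feasible point found

class UnionFind:
    """Hypothetical union-find with path compression for grouping regions."""
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, i):
        while self.parent[i] != i:
            self.parent[i] = self.parent[self.parent[i]]  # path compression
            i = self.parent[i]
        return i
    def union(self, i, j):
        self.parent[self.find(i)] = self.find(j)

def overlap_components(polytopes):
    """Group polytopes (a list of (A, b) pairs standing in for region images)
    into connected components of the pairwise-overlap graph."""
    uf = UnionFind(len(polytopes))
    for i in range(len(polytopes)):
        for j in range(i + 1, len(polytopes)):
            if polytopes_intersect(*polytopes[i], *polytopes[j]):
                uf.union(i, j)
    groups = {}
    for i in range(len(polytopes)):
        groups.setdefault(uf.find(i), []).append(i)
    return list(groups.values())

# Toy example: three intervals on the real line, encoded as A x <= b.
boxes = [
    (np.array([[1.0], [-1.0]]), np.array([1.0, 0.0])),   # [0, 1]
    (np.array([[1.0], [-1.0]]), np.array([1.5, -0.5])),  # [0.5, 1.5]
    (np.array([[1.0], [-1.0]]), np.array([3.0, -2.0])),  # [2, 3]
]
print(overlap_components(boxes))  # -> [[0, 1], [2]]

In this toy run, the first two intervals overlap and the third is disjoint, so the union-find pass yields two components. The paper's actual pipeline would derive the polytopes from a trained network's linear regions; here they are supplied by hand purely to show the LP-plus-union-find mechanics.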

@article{beshkov2025_2502.01360,
  title={A Relative Homology Theory of Representation in Neural Networks},
  author={Kosio Beshkov},
  journal={arXiv preprint arXiv:2502.01360},
  year={2025}
}