A Relative Homology Theory of Representation in Neural Networks
Main: 13 pages · Appendix: 8 pages · Bibliography: 6 pages · 9 figures · 3 tables
Abstract
Previous research has shown that the set of maps implemented by neural networks with a ReLU activation function is exactly the set of continuous piecewise-linear maps. Furthermore, such a network induces a hyperplane arrangement that partitions the input domain into convex polyhedra, on each of which the network acts affinely.
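The polyhedral decomposition described above can be made concrete with a small sketch (all weights and sizes here are illustrative assumptions, not taken from the paper): each hidden ReLU contributes one hyperplane, and inputs that share the same on/off activation pattern lie in the same convex polyhedron, where the network reduces to a single affine map.

```python
import numpy as np

# Hypothetical tiny ReLU network: 2-D input, one hidden layer of 3 units.
# Weights are random illustrative values, not from the paper.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 2))   # hidden-layer weights
b = rng.normal(size=3)        # hidden-layer biases

# Sample a grid over the input domain.
xs = np.linspace(-3.0, 3.0, 200)
grid = np.stack(np.meshgrid(xs, xs), axis=-1).reshape(-1, 2)

# Each row is the activation pattern (which ReLUs fire) at one input.
# Inputs with the same pattern lie in the same convex polyhedron of the
# arrangement of the 3 hyperplanes W_i . x + b_i = 0.
patterns = (grid @ W.T + b > 0)
n_regions = len(np.unique(patterns, axis=0))
print(n_regions)  # number of polyhedral regions the grid intersects
```

With 3 hyperplanes in the plane the arrangement has at most 1 + 3 + 3 = 7 regions, so the count printed is bounded by that; a finite grid may of course miss some regions.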
