On the emergence of simplex symmetry in the final and penultimate layers of neural network classifiers

A recent numerical study observed that neural network classifiers enjoy a large degree of symmetry in the penultimate layer. Namely, if $h(x) = A f(x) + b$ where $A$ is a linear map and $f$ is the output of the penultimate layer of the network (after activation), then all data points in a class $C_i$ are mapped to a single point $y_i$ by $f$, and the points $y_i$ are located at the vertices of a regular $(k-1)$-dimensional standard simplex in a high-dimensional Euclidean space, where $k$ is the number of classes. We explain this observation analytically in toy models for highly expressive deep neural networks. In complementary examples, we demonstrate rigorously that even the final output of the classifier $h$ is not uniform over data samples from a class $C_i$ if $h$ is a shallow network (or if the deeper layers do not bring the data samples into a convenient geometric configuration).
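To make the simplex statement concrete, the following is a minimal NumPy sketch (not from the paper; the number of classes $k$ and the chosen normalization are illustrative assumptions) that constructs $k$ points with the claimed symmetry and checks it numerically: equal norms, identical pairwise inner products, and identical pairwise distances, i.e. the vertices of a regular $(k-1)$-dimensional simplex.

```python
import numpy as np

k = 4  # number of classes (illustrative choice)

# Vertices of a regular (k-1)-simplex centered at the origin in R^k:
# y_i proportional to e_i - (1/k) * 1, rescaled to unit norm.
Y = np.sqrt(k / (k - 1)) * (np.eye(k) - np.ones((k, k)) / k)

# All vertices have the same norm ...
norms = np.linalg.norm(Y, axis=1)

# ... and all pairwise inner products coincide (equal to -1/(k-1)),
# which is the defining symmetry of a regular simplex.
gram = Y @ Y.T

print("norms:", norms)                                   # all ~1
print("off-diagonal inner products:", gram[0, 1], gram[0, 2])  # all ~ -1/(k-1)

# Pairwise distances between distinct vertices are all equal.
dists = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)
off_diag = dists[~np.eye(k, dtype=bool)]
print("pairwise distances equal:", np.allclose(off_diag, dists[0, 1]))
```

In this picture, the collapse observed in the numerical study corresponds to the class means of $f$ (after training) aligning, up to rotation and scaling, with a configuration like `Y` above.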