Bottleneck Structure in Learned Features: Low-Dimension vs Regularity Tradeoff

Previous work has shown that DNNs with large depth $L$ and $L_2$-regularization are biased towards learning low-dimensional representations of the inputs, which can be interpreted as minimizing a notion of rank $R^{(0)}(f)$ of the learned function $f$, conjectured to be the Bottleneck rank. We compute finite-depth corrections to this result, revealing a measure of regularity $R^{(1)}$ which bounds the pseudo-determinant of the Jacobian $|Jf(x)|_+$ and is subadditive under composition and addition. This formalizes a balance between learning low-dimensional representations and minimizing complexity/irregularity in the feature maps, allowing the network to learn the `right' inner dimension. Finally, we prove the conjectured bottleneck structure in the learned features as $L \to \infty$: for large depths, almost all hidden representations are approximately $R^{(0)}(f)$-dimensional, and almost all weight matrices $W_\ell$ have $R^{(0)}(f)$ singular values close to 1 while the others are $O(L^{-1/2})$. Interestingly, the use of large learning rates is required to guarantee an order $O(L)$ NTK, which in turn guarantees infinite-depth convergence of the representations of almost all layers.
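To make the regularity quantity concrete, below is a minimal sketch (not from the paper) that numerically estimates the pseudo-determinant of the Jacobian $|Jf(x)|_+$ at a single input, taken as the product of the Jacobian's singular values above a small cutoff. The toy MLP architecture, the random weights, and the cutoff `eps` are illustrative assumptions, not the paper's setup.

```python
# Minimal sketch: pseudo-determinant of the Jacobian of a toy MLP at one input.
# Everything here (architecture, random weights, cutoff eps) is an assumption
# for illustration only.
import jax
import jax.numpy as jnp

def mlp(params, x):
    # Fully-connected network with ReLU hidden layers.
    for W, b in params[:-1]:
        x = jax.nn.relu(W @ x + b)
    W, b = params[-1]
    return W @ x + b

def init_params(key, widths):
    # Random Gaussian weights with 1/sqrt(fan-in) scaling, zero biases.
    params = []
    for d_in, d_out in zip(widths[:-1], widths[1:]):
        key, k = jax.random.split(key)
        W = jax.random.normal(k, (d_out, d_in)) / jnp.sqrt(d_in)
        params.append((W, jnp.zeros(d_out)))
    return params

def pseudo_det_jacobian(params, x, eps=1e-6):
    # Jacobian of the network output with respect to the input x.
    J = jax.jacobian(lambda z: mlp(params, z))(x)
    s = jnp.linalg.svd(J, compute_uv=False)
    # Pseudo-determinant: product of singular values above the cutoff eps.
    return jnp.prod(jnp.where(s > eps, s, 1.0))

key = jax.random.PRNGKey(0)
params = init_params(key, [10, 64, 64, 3])   # 10-dim input, 3-dim output
x = jax.random.normal(jax.random.PRNGKey(1), (10,))
print(pseudo_det_jacobian(params, x))
```

The SVD-with-cutoff route is just one way to approximate the product of nonzero singular values numerically; the cutoff stands in for deciding which directions count as part of the effective (bottleneck) dimension at that input.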
View on arXiv