Kolmogorov Width Decay and Poor Approximators in Machine Learning:
Shallow Neural Networks, Random Feature Models and Neural Tangent Kernels
Abstract
We establish a scale separation of Kolmogorov width type between subspaces of a given Banach space under the condition that a sequence of linear maps converges much faster on one of the subspaces. The general technique is then applied to show that reproducing kernel Hilbert spaces are poor $L^2$-approximators for the class of two-layer neural networks in high dimension, and that multi-layer networks with small path norm are poor approximators for certain Lipschitz functions, also in the $L^2$-topology.
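For context (this definition is standard background and not stated in the abstract itself), the Kolmogorov $n$-width of a set $K$ in a normed space $X$ is usually defined as
\[
d_n(K, X) \;=\; \inf_{\substack{V_n \subseteq X \\ \dim V_n \le n}} \; \sup_{f \in K} \; \inf_{g \in V_n} \; \| f - g \|_X ,
\]
i.e. the best worst-case error achievable when approximating elements of $K$ by an optimally chosen linear subspace of dimension at most $n$. The "scale separation" referred to above concerns how fast such widths (or analogous quantities for the approximating models) decay as $n$ grows.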
