A Numerical Investigation of the Minimum Width of a Neural Network
Abstract
Neural network width and depth are fundamental aspects of network topology. Universal approximation theorems guarantee that, with sufficient width or depth, there exists a neural network that approximates a given function arbitrarily well. These theorems rest on idealized assumptions, such as access to unlimited data, that must be discretized in practice. Through numerical experiments, we test the lower bounds on width established by Hanin in 2017.
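For context on the bound being tested (our summary of Hanin and Sellke's 2017 result, not a claim made in the abstract itself): for ReLU networks approximating continuous functions $f: [0,1]^{d_{\text{in}}} \to \mathbb{R}^{d_{\text{out}}}$, the minimal width $w_{\min}$ satisfies $d_{\text{in}} + 1 \le w_{\min} \le d_{\text{in}} + d_{\text{out}}$; in particular, width $d_{\text{in}}$ is provably insufficient. The following is a minimal sketch of the kind of numerical experiment the abstract describes, assuming PyTorch; the target function, widths, depth, and training settings are illustrative choices, not the paper's actual protocol.

    # Sketch: train fixed-depth ReLU networks of increasing width on a
    # scalar target and record the achieved error. All settings below
    # (target, widths, depth, optimizer) are illustrative assumptions.
    import torch
    import torch.nn as nn

    def make_net(width, depth, d_in=1):
        # depth counts hidden layers; all hidden layers share one width
        layers = [nn.Linear(d_in, width), nn.ReLU()]
        for _ in range(depth - 1):
            layers += [nn.Linear(width, width), nn.ReLU()]
        layers.append(nn.Linear(width, 1))
        return nn.Sequential(*layers)

    # Finite samples stand in for the theorems' idealized setting,
    # mirroring the discretization the abstract mentions.
    x = torch.linspace(-1, 1, 256).unsqueeze(1)
    y = torch.sin(3 * x)  # illustrative continuous target

    for width in [1, 2, 3, 4, 8]:
        net = make_net(width, depth=4)
        opt = torch.optim.Adam(net.parameters(), lr=1e-3)
        for _ in range(2000):
            opt.zero_grad()
            loss = nn.functional.mse_loss(net(x), y)
            loss.backward()
            opt.step()
        print(f"width={width}: final MSE={loss.item():.2e}")

With $d_{\text{in}} = 1$, the lower bound predicts that width-1 networks should plateau at a nontrivial error on such a target while widths of 2 and above can, in principle, drive it down; in practice the observed errors also reflect optimization and finite-sample effects, which is precisely the gap such experiments probe.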
