Minimum Width of Leaky-ReLU Neural Networks for Uniform Universal Approximation

The study of universal approximation properties (UAP) for neural networks (NN) has a long history. When the network width is unlimited, only a single hidden layer is sufficient for UAP. In contrast, when the depth is unlimited, the width for UAP needs to be not less than the critical width , where and are the dimensions of the input and output, respectively. Recently, \cite{cai2022achieve} shows that a leaky-ReLU NN with this critical width can achieve UAP for functions on a compact domain , \emph{i.e.,} the UAP for . This paper examines a uniform UAP for the function class and gives the exact minimum width of the leaky-ReLU NN as , where is the additional dimensions for approximating continuous functions with diffeomorphisms via embedding. To obtain this result, we propose a novel lift-flow-discretization approach that shows that the uniform UAP has a deep connection with topological theory.
View on arXiv