It has been shown that deep neural networks of a large enough width are universal approximators, but they are not if the width is too small. There have been several attempts to characterize the minimum width $w_{\min}$ enabling the universal approximation property; however, only a few of them found the exact values. In this work, we show that the minimum width for $L^p$ approximation of $L^p$ functions from $[0,1]^{d_x}$ to $\mathbb{R}^{d_y}$ is exactly $\max\{d_x, d_y, 2\}$ if an activation function is ReLU-Like (e.g., ReLU, GELU, Softplus). Compared to the known result for ReLU networks, $w_{\min} = \max\{d_x+1, d_y\}$ when the domain is $\mathbb{R}^{d_x}$, our result first shows that approximation on a compact domain requires a smaller width than on $\mathbb{R}^{d_x}$. We next prove a lower bound on $w_{\min}$ for uniform approximation using general activation functions including ReLU: $w_{\min} \ge d_y + 1$ if $d_x < d_y \le 2d_x$. Together with our first result, this shows a dichotomy between $L^p$ and uniform approximations for general activation functions and input/output dimensions.
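As a concrete instance of the two width formulas above (an illustrative calculation only; the dimensions $d_x = d_y = 3$ are chosen arbitrarily and do not appear in the abstract):
$$
w_{\min} = \max\{d_x, d_y, 2\} = 3 \ \text{ on } [0,1]^{3}
\qquad \text{versus} \qquad
w_{\min} = \max\{d_x + 1, d_y\} = 4 \ \text{ on } \mathbb{R}^{3},
$$
so restricting to the compact domain lowers the required width by one in this case.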