Deep Neural Networks: Multi-Classification and Universal Approximation

We demonstrate that a ReLU deep neural network with a width of 2 and a depth of 2N + 4M - 1 layers can achieve finite sample memorization for any dataset comprising N elements in R^d, where d >= 1, and M classes, thereby ensuring accurate classification. By modeling the neural network as a time-discrete nonlinear dynamical system, we interpret the memorization property as a problem of simultaneous or ensemble controllability. This problem is addressed by constructing the network parameters inductively and explicitly, bypassing the need for training or solving any optimization problem. Additionally, we establish that such a network can achieve universal approximation in L^p(Omega; R_+), where Omega is a bounded subset of R^d and p in [1, infty), using a ReLU deep neural network with a width of d + 1. We also provide depth estimates for approximating W^{1,p} functions and width estimates for approximating L^p(Omega; R^m) for m >= 1. Our proofs are constructive, offering explicit values for the biases and weights involved.
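The idea of reading a ReLU network as a time-discrete dynamical system, with parameters written down explicitly rather than trained, can be illustrated on a toy example. The sketch below is not the paper's construction: it uses a hypothetical one-dimensional dataset and hand-picked weights for a width-2 ReLU network whose layers compose to |x|, which happens to separate the two classes, so every sample is memorized without any optimization.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

# Hypothetical tiny dataset in R^1 with two classes (illustrative only).
X = np.array([-1.0, 0.0, 1.0])
labels = np.array([0, 1, 0])  # class 0 at |x| = 1, class 1 at x = 0

# Explicit, training-free parameters for a width-2 ReLU network:
# layer 1 maps x -> (relu(x), relu(-x)); layer 2 sums the pair,
# so the composition computes |x|.
W1 = np.array([[1.0], [-1.0]]); b1 = np.zeros(2)
W2 = np.array([[1.0, 1.0]]);    b2 = np.zeros(1)

def forward(x):
    # One step of the time-discrete dynamics: x_{k+1} = relu(W_k x_k + b_k),
    # followed by an affine readout layer.
    h = relu(W1 @ np.atleast_1d(x) + b1)
    return (W2 @ h + b2)[0]

# Threshold the scalar output to recover the class label.
preds = np.array([0 if forward(x) > 0.5 else 1 for x in X])
print(preds)  # every sample is classified exactly as labeled
```

Each layer here acts as one step of the controlled dynamics, and "memorization" means steering all samples simultaneously to their target labels, which is the ensemble-controllability viewpoint the abstract describes.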