Constructive Universal Approximation and Finite Sample Memorization by Narrow Deep ReLU Networks
We present a fully constructive analysis of deep ReLU neural networks for classification and function approximation tasks. First, we prove that any dataset with $N$ distinct points in $\mathbb{R}^d$ and $M$ output classes can be exactly classified using a multilayer perceptron (MLP) of width $2$ and depth at most $2N + 4M - 1$, with all network parameters constructed explicitly. This result is sharp with respect to width and is interpreted through the lens of simultaneous or ensemble controllability in discrete nonlinear dynamics.

Second, we show that these explicit constructions yield uniform bounds on the parameter norms and, in particular, provide upper estimates for minimizers of standard regularized training loss functionals in supervised learning. As the regularization parameter vanishes, the trained networks converge to exact classifiers with bounded norm, explaining the effectiveness of overparameterized training in the small-regularization regime.

We also prove a universal approximation theorem in $L^p(\Omega; \mathbb{R}_+)$ for any bounded domain $\Omega \subset \mathbb{R}^d$ and $p \in [1, \infty)$, using MLPs of fixed width $d + 1$. The proof is constructive, geometrically motivated, and provides explicit estimates on the network depth when the target function belongs to the Sobolev space $W^{1,p}(\Omega)$. We also extend the approximation and depth estimation results to $L^p(\Omega; \mathbb{R}^m)$ for any $m \geq 1$.

Our results offer a unified and interpretable framework connecting controllability, expressivity, and training dynamics in deep neural networks.
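To make the notion of exact finite-sample memorization concrete, here is a minimal sketch in one dimension: a ReLU network whose parameters are written down in closed form so that it interpolates $N$ distinct points exactly. Note this is a simple shallow, width-$(N-1)$ construction for illustration only, not the paper's narrow deep (width-$2$) architecture; the function names and setup are our own.

```python
import numpy as np


def relu(z):
    return np.maximum(z, 0.0)


def build_relu_interpolant(xs, ys):
    """Explicitly construct f(x) = bias + sum_i w_i * relu(x - k_i)
    so that f(xs[j]) == ys[j] for all j (xs distinct, 1D).

    Illustrative shallow construction, NOT the paper's width-2 deep one:
    knots sit at the data points, and each ReLU weight is the change in
    slope of the piecewise-linear interpolant at that knot.
    """
    order = np.argsort(xs)
    xs = np.asarray(xs, dtype=float)[order]
    ys = np.asarray(ys, dtype=float)[order]
    slopes = np.diff(ys) / np.diff(xs)                    # slope on each interval
    weights = np.diff(np.concatenate(([0.0], slopes)))    # slope change at each knot
    bias, knots = ys[0], xs[:-1]
    return bias, knots, weights


def evaluate(params, x):
    bias, knots, weights = params
    return bias + float(np.sum(weights * relu(x - knots)))


# Four arbitrary distinct points: the constructed network fits them exactly.
xs = [0.0, 1.0, 2.5, 4.0]
ys = [1.0, -1.0, 0.5, 2.0]
params = build_relu_interpolant(xs, ys)
for x, y in zip(xs, ys):
    assert abs(evaluate(params, x) - y) < 1e-12   # exact memorization
```

The point of the sketch is the same as in the abstract: memorization here requires no training loop at all, since every weight and bias is computed in closed form from the data.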