The Effects of Multi-Task Learning on ReLU Neural Network Functions

This paper studies the properties of solutions to multi-task shallow ReLU neural network learning problems, wherein the network is trained to fit a dataset with minimal sum of squared weights. Remarkably, the solutions learned for each individual task resemble those obtained by solving a kernel regression problem, revealing a novel connection between neural networks and kernel methods. It is known that single-task neural network learning problems are equivalent to a minimum-norm interpolation problem in a non-Hilbertian Banach space, and that the solutions of such problems are generally non-unique. In contrast, we prove that the solutions to univariate-input, multi-task neural network interpolation problems are almost always unique, and coincide with the solution to a minimum-norm interpolation problem in a Sobolev (reproducing kernel) Hilbert space. We also demonstrate a similar phenomenon in the multivariate-input case; specifically, we show that neural network learning problems with large numbers of diverse tasks are approximately equivalent to an ℓ2 (Hilbert space) minimization problem over a fixed kernel determined by the optimal neurons.
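To make the setup concrete, the following is a minimal sketch of the problem class the abstract describes, in assumed notation (the width K, task count T, and symbol choices are illustrative and not taken verbatim from the paper). A shallow ReLU network with T outputs takes the form

\[
f(x) = \sum_{k=1}^{K} v_k\, \sigma\!\left(w_k^\top x + b_k\right), \qquad v_k \in \mathbb{R}^T,\; w_k \in \mathbb{R}^d,\; \sigma(z) = \max(z, 0),
\]

and the minimal-sum-of-squared-weights interpolation problem over data \((x_i, y_i)_{i=1}^{N} \subset \mathbb{R}^d \times \mathbb{R}^T\) is

\[
\min_{\{v_k, w_k, b_k\}} \; \frac{1}{2} \sum_{k=1}^{K} \left( \|v_k\|_2^2 + \|w_k\|_2^2 \right) \quad \text{subject to} \quad f(x_i) = y_i, \; i = 1, \dots, N.
\]

In the univariate-input case, the abstract's claim is that the per-task components of the (almost always unique) solution coincide with the solutions of a minimum-norm interpolation problem in a Sobolev reproducing kernel Hilbert space \(\mathcal{H}\), i.e., for each task t,

\[
\min_{g \in \mathcal{H}} \; \|g\|_{\mathcal{H}} \quad \text{subject to} \quad g(x_i) = y_{i,t}, \; i = 1, \dots, N.
\]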
@article{nakhleh2025_2410.21696,
  title   = {The Effects of Multi-Task Learning on ReLU Neural Network Functions},
  author  = {Julia Nakhleh and Joseph Shenouda and Robert D. Nowak},
  journal = {arXiv preprint arXiv:2410.21696},
  year    = {2025}
}