Neural Multivariate Regression: Qualitative Insights from the Unconstrained Feature Model

Abstract

The Unconstrained Feature Model (UFM) is a mathematical framework that enables closed-form approximations for minimal training loss and related performance measures in deep neural networks (DNNs). This paper leverages the UFM to provide qualitative insights into neural multivariate regression, a critical task in imitation learning, robotics, and reinforcement learning. Specifically, we address two key questions: (1) How do multi-task models compare to multiple single-task models in terms of training performance? (2) Can whitening and normalizing regression targets improve training performance? The UFM theory predicts that multi-task models achieve strictly smaller training MSE than multiple single-task models when the same or stronger regularization is applied to the latter, and our empirical results confirm this prediction. Regarding whitening and normalizing regression targets, the UFM theory predicts that they reduce training MSE when the average variance across the target dimensions is less than one, and our empirical results again confirm this prediction. These findings highlight the UFM as a powerful framework for deriving actionable insights into DNN design and data pre-processing strategies.
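The target pre-processing the abstract refers to can be sketched as follows. This is a minimal NumPy illustration, not the paper's code: `normalize_targets` standardizes each target dimension to zero mean and unit variance, while `whiten_targets` additionally decorrelates the dimensions (here via ZCA whitening); the function names and the ZCA choice are illustrative assumptions.

```python
import numpy as np

def normalize_targets(Y):
    """Standardize each target dimension to zero mean, unit variance.

    Y: (n_samples, n_dims) array of regression targets.
    Returns the normalized targets plus (mu, sigma) to undo the transform.
    """
    mu = Y.mean(axis=0)
    sigma = Y.std(axis=0)
    return (Y - mu) / sigma, mu, sigma

def whiten_targets(Y):
    """Whiten targets: decorrelate dimensions and scale to unit variance.

    Uses ZCA whitening (illustrative choice): W = C^{-1/2} for the
    sample covariance C, so the whitened targets have identity covariance.
    Returns the whitened targets plus (mu, W) to undo the transform.
    """
    mu = Y.mean(axis=0)
    Yc = Y - mu
    cov = np.cov(Yc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    W = eigvecs @ np.diag(1.0 / np.sqrt(eigvals)) @ eigvecs.T
    return Yc @ W, mu, W
```

Either transform is applied to the targets before training; predictions are mapped back with the stored `(mu, sigma)` or `(mu, W)`. Per the UFM prediction summarized above, this helps training MSE when the average per-dimension target variance is below one.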

@article{andriopoulos2025_2505.09308,
  title={Neural Multivariate Regression: Qualitative Insights from the Unconstrained Feature Model},
  author={George Andriopoulos and Soyuj Jung Basnet and Juan Guevara and Li Guo and Keith Ross},
  journal={arXiv preprint arXiv:2505.09308},
  year={2025}
}