
Low-rank bias, weight decay, and model merging in neural networks

Abstract

We explore the low-rank structure of the weight matrices in neural networks originating from training with Gradient Descent (GD) and Gradient Flow (GF) with L2 regularization (also known as weight decay). We show several properties of GD-trained deep neural networks induced by L2 regularization. In particular, for a stationary point of GD we show alignment of the parameters and the gradient, norm preservation across layers, and a low-rank bias: properties previously known in the context of GF solutions. Experiments show that the assumptions made in the analysis affect the observations only mildly. In addition, we investigate a multitask learning phenomenon enabled by L2 regularization and the low-rank bias. In particular, we show that if two networks are trained such that the inputs in the training set of one network are approximately orthogonal to the inputs in the training set of the other, then the network obtained by simply summing the weights of the two networks performs as well on both training sets as the respective individual networks. We demonstrate this for shallow ReLU neural networks trained by GD, as well as for deep linear and deep ReLU networks trained by GF.
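
One standard way to read the alignment claim (a sketch of the setting, not the paper's exact statement) is through the L2-regularized objective and its first-order condition:

$$ L_\lambda(\theta) \;=\; L(\theta) + \frac{\lambda}{2}\sum_{l} \lVert W_l \rVert_F^2, \qquad \nabla_{W_l} L_\lambda(\theta) = 0 \;\Longrightarrow\; \nabla_{W_l} L(\theta) = -\lambda\, W_l, $$

so at a stationary point each weight matrix is anti-parallel to the gradient of the unregularized loss, which is the kind of parameter-gradient alignment referred to above.

The merging claim can be illustrated with a small experiment. The sketch below is not the authors' code; the architecture, widths, and hyperparameters are assumptions. It trains two shallow ReLU networks by full-batch gradient descent with weight decay on tasks whose inputs occupy disjoint coordinate blocks (and are therefore exactly orthogonal), merges them by summing the weight matrices layer-wise, and evaluates the merged network on both training sets. How closely the merged model matches the individual networks will depend on the width, the weight-decay strength, and the number of steps.

# Hypothetical sketch (not the paper's code): merge two shallow ReLU networks
# trained with weight decay on tasks whose inputs lie in orthogonal subspaces.
import torch
import torch.nn as nn

torch.manual_seed(0)
d, m, n = 32, 256, 64          # input dimension, hidden width, samples per task

def make_task(lo, hi):
    # Inputs occupy coordinates [lo, hi); the two tasks use disjoint blocks,
    # so their inputs are exactly orthogonal to each other.
    X = torch.zeros(n, d)
    X[:, lo:hi] = torch.randn(n, hi - lo)
    w = torch.zeros(d)
    w[lo:hi] = torch.randn(hi - lo) / (hi - lo) ** 0.5   # unit-scale linear teacher
    y = (X @ w).unsqueeze(1)
    return X, y

def shallow_relu_net():
    return nn.Sequential(nn.Linear(d, m, bias=False), nn.ReLU(),
                         nn.Linear(m, 1, bias=False))

def train(X, y, steps=5000, lr=0.01, wd=1e-2):
    net = shallow_relu_net()
    # Full-batch SGD = gradient descent; weight_decay adds the L2 penalty.
    opt = torch.optim.SGD(net.parameters(), lr=lr, weight_decay=wd)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((net(X) - y) ** 2).mean()
        loss.backward()
        opt.step()
    return net

X_a, y_a = make_task(0, d // 2)
X_b, y_b = make_task(d // 2, d)
net_a, net_b = train(X_a, y_a), train(X_b, y_b)

# Merge by summing the weight matrices layer-wise.
merged = shallow_relu_net()
with torch.no_grad():
    for p_m, p_a, p_b in zip(merged.parameters(),
                             net_a.parameters(), net_b.parameters()):
        p_m.copy_(p_a + p_b)

def mse(net, X, y):
    with torch.no_grad():
        return ((net(X) - y) ** 2).mean().item()

print("task A  individual:", mse(net_a, X_a, y_a), " merged:", mse(merged, X_a, y_a))
print("task B  individual:", mse(net_b, X_b, y_b), " merged:", mse(merged, X_b, y_b))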

@article{kuzborskij2025_2502.17340,
  title={Low-rank bias, weight decay, and model merging in neural networks},
  author={Ilja Kuzborskij and Yasin Abbasi Yadkori},
  journal={arXiv preprint arXiv:2502.17340},
  year={2025}
}