
Geometric structure of Deep Learning networks and construction of global ${\mathcal L}^2$ minimizers

Abstract

In this paper, we explicitly determine local and global minimizers of the $\mathcal{L}^2$ cost function in underparametrized Deep Learning (DL) networks; our main goal is to shed light on their geometric structure and properties. We accomplish this by a direct construction, without invoking the gradient descent flow at any point of this work. We specifically consider $L$ hidden layers, a ReLU ramp activation function, an $\mathcal{L}^2$ Schatten class (or Hilbert-Schmidt) cost function, input and output spaces $\mathbb{R}^Q$ with equal dimension $Q\geq 1$, and hidden layers also defined on $\mathbb{R}^{Q}$; the training inputs are assumed to be sufficiently clustered. The training input size $N$ can be arbitrarily large; thus, we are considering the underparametrized regime. More general settings are left to future work. We construct an explicit family of minimizers for the global minimum of the cost function in the case $L\geq Q$, which we show to be degenerate. Moreover, we determine a set of $2^Q-1$ distinct degenerate local minima of the cost function. In the context presented here, the concatenation of hidden layers of the DL network is reinterpreted as a recursive application of a {\em truncation map} which "curates" the training inputs by minimizing their noise-to-signal ratio.
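For orientation, the setting described in the abstract can be sketched in a generic form as follows; the symbols $W_\ell$, $b_\ell$, the output-layer map, and the normalization of the cost are illustrative assumptions consistent with the stated setup ($L$ hidden layers on $\mathbb{R}^Q$, ReLU activation, $\mathcal{L}^2$ cost over $N$ training pairs), not necessarily the paper's exact notation.

```latex
% Hidden-layer recursion with ReLU acting componentwise
% (generic form; the paper's precise conventions may differ):
x^{(\ell)} \;=\; \sigma\!\left( W_\ell \, x^{(\ell-1)} + b_\ell \right),
\qquad
\sigma(x) = (x)_+ ,
\qquad
W_\ell \in \mathbb{R}^{Q \times Q}, \quad \ell = 1, \dots, L,

% L^2 (Hilbert-Schmidt type) cost over N training pairs
% (x_j^{(0)}, y_j), with an affine output layer (assumed):
\mathcal{C}
\;=\;
\Big( \tfrac{1}{N} \sum_{j=1}^{N}
\big| W_{L+1} \, x_j^{(L)} + b_{L+1} - y_j \big|^2 \Big)^{1/2}.
```

In this reading, each hidden layer applies an affine map followed by the ReLU truncation $\sigma(x)=(x)_+$, which is what the abstract reinterprets as a recursively applied truncation map acting on the training inputs.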
