Geometric structure of Deep Learning networks and construction of global ${\mathcal L}^2$ minimizers

Abstract

In this paper, we provide a geometric interpretation of the structure of Deep Learning (DL) networks, characterized by $L$ hidden layers, a ramp activation function, an ${\mathcal L}^2$ Schatten class (or Hilbert-Schmidt) cost function, and input and output spaces ${\mathbb R}^Q$ with equal dimension $Q\geq 1$. The hidden layers are defined on spaces ${\mathbb R}^{Q}$, as well. We apply our recent results on shallow neural networks to construct an explicit family of minimizers for the global minimum of the cost function in the case $L\geq Q$, which we show to be degenerate. In the context presented here, the hidden layers of the DL network "curate" the training inputs by recursive application of a truncation map that minimizes the noise to signal ratio of the training inputs. Moreover, we determine a set of $2^Q-1$ distinct degenerate local minima of the cost function.
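As a minimal illustration of the ramp activation referenced in the abstract (not the paper's actual construction, whose truncation maps involve trained weight matrices and bias shifts), the following sketch shows the componentwise ramp $\tau(x)=\max(0,x)$ acting as a truncation map when applied recursively across hypothetical hidden layers:

```python
import numpy as np

def ramp(x):
    # Ramp (ReLU) activation: componentwise max(0, x).
    return np.maximum(0.0, x)

def truncate_recursively(x, L):
    # Illustrative only: apply the ramp once per hidden layer.
    # With identity weights and zero biases (an assumption made
    # here for simplicity), the ramp is idempotent, so L
    # applications coincide with a single truncation.
    y = np.asarray(x, dtype=float)
    for _ in range(L):
        y = ramp(y)
    return y

x = np.array([-1.5, 0.0, 2.0, -0.3, 4.0])
print(truncate_recursively(x, 3))  # -> [0. 0. 2. 0. 4.]
```

In the paper's setting the truncation acts on the training inputs themselves, reducing their noise-to-signal ratio layer by layer; the sketch above only captures the componentwise cutoff behavior of the ramp.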
