
Interpretable global minima of deep ReLU neural networks on sequentially separable data

Main: 29 pages, 3 figures; Bibliography: 2 pages
Abstract

We explicitly construct zero-loss neural network classifiers. We write the weight matrices and bias vectors in terms of cumulative parameters, which determine truncation maps acting recursively on input space. The configurations of training data considered are (i) sufficiently small, well-separated clusters corresponding to each class, and (ii) equivalence classes which are sequentially linearly separable. In the best case, for $Q$ classes of data in $\mathbb{R}^M$, global minimizers can be described with $Q(M+2)$ parameters.
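A truncation (clamping) map of the kind mentioned in the abstract can be realized exactly by ReLU units, since clamping to an interval decomposes into two ReLU compositions. The sketch below is a minimal illustration, assuming one-dimensional input and two hypothetical toy clusters; the names `truncation` and the cluster values are illustrative only and do not reproduce the paper's cumulative-parameter construction in $\mathbb{R}^M$.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def truncation(x, a, b):
    # Clamp x to the interval [a, b] via two ReLU compositions:
    #   max(x, a) = a + relu(x - a)   (lower truncation)
    #   min(y, b) = b - relu(b - y)   (upper truncation)
    y = a + relu(x - a)
    return b - relu(b - y)

# Hypothetical toy data: two well-separated 1D clusters
# (class 0 near 0, class 1 near 10).
x = np.array([-0.3, 0.1, 0.2, 9.8, 10.0, 10.4])
labels = np.array([0, 0, 0, 1, 1, 1])

# Truncating to [4, 6] collapses each cluster onto an endpoint,
# so thresholding at the midpoint classifies every point correctly.
scores = truncation(x, 4.0, 6.0)
pred = (scores > 5.0).astype(int)
assert np.array_equal(pred, labels)  # zero classification error
```

In this toy case the explicit weights and biases are read off directly from the cluster geometry rather than learned, which is the flavor of the interpretable global minimizers described above.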
