Linear Independence of Generalized Neurons and Related Functions

Leyang Zhang
Main: 50 pages, 5 figures; bibliography: 1 page
Abstract

The linear independence of neurons plays a significant role in the theoretical analysis of neural networks. Specifically, given neurons $H_1, \dots, H_n \colon \bR^N \times \bR^d \to \bR$, we are interested in the following question: when are $\{H_1(\theta_1, \cdot), \dots, H_n(\theta_n, \cdot)\}$ linearly independent as the parameters $\theta_1, \dots, \theta_n$ of these functions vary over $\bR^N$? Previous works give a complete characterization for two-layer neurons without bias, for generic smooth activation functions. In this paper, we study the problem for neurons with arbitrary layers and widths, giving a simple but complete characterization for generic analytic activation functions.
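To make the question concrete, the sketch below (not from the paper) numerically probes linear independence for the two-layer bias-free case: each neuron is $H_i(\theta_i, x) = \sigma(\theta_i \cdot x)$ with $\sigma = \tanh$ assumed as the activation, and independence is tested by sampling inputs and checking the rank of the resulting value matrix. The function names and sampling scheme are illustrative choices, not the paper's method.

```python
import numpy as np

def neuron_values(thetas, xs, activation=np.tanh):
    """Evaluate two-layer bias-free neurons H_i(theta_i, x) = sigma(theta_i . x).

    thetas: (n, d) array, one parameter vector per neuron.
    xs:     (m, d) array of sample inputs.
    Returns an (n, m) matrix whose i-th row samples H_i(theta_i, .).
    """
    return activation(thetas @ xs.T)

def are_independent(thetas, num_samples=200, seed=0):
    """Heuristically test linear independence of {H_i(theta_i, .)}.

    Samples random inputs and checks whether the value matrix has full
    row rank; for analytic activations, a rank deficiency on generic
    samples indicates a genuine functional dependence.
    """
    rng = np.random.default_rng(seed)
    xs = rng.standard_normal((num_samples, thetas.shape[1]))
    M = neuron_values(thetas, xs)
    return np.linalg.matrix_rank(M) == thetas.shape[0]

# Distinct directions give independent neuron functions ...
print(are_independent(np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])))
# ... while a repeated parameter vector collapses two rows into one.
print(are_independent(np.array([[1.0, 0.0], [1.0, 0.0]])))
```

Note that because $\tanh$ is odd, $\tanh(-\theta \cdot x) = -\tanh(\theta \cdot x)$, so sign-flipped parameters also produce dependence; characterizations of this kind must account for such activation symmetries.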
