Exponential Lower Bounds for Threshold Circuits of Sub-Linear Depth and
Energy
In this paper, we investigate the computational power of threshold circuits and other theoretical models of neural networks in terms of the following four complexity measures: size (the number of gates), depth, weight, and energy. Here the energy complexity of a circuit measures the sparsity of its computation, and is defined as the maximum number of gates outputting non-zero values, taken over all input assignments. As our main result, we prove that any threshold circuit C of size s, depth d, energy e and weight w satisfies log rk(M_C) ≤ ed(log s + log w + log n), where rk(M_C) is the rank of the communication matrix M_C of the 2n-variable Boolean function that C computes. Thus, such a threshold circuit C can compute only a Boolean function whose communication matrix has rank bounded by a product of logarithmic factors of s and w and linear factors of d and e. This implies an exponential lower bound on the size of even sublinear-depth threshold circuits if the energy and weight are sufficiently small. For other models of neural networks, such as discretized ReLU circuits and discretized sigmoid circuits, we prove that a similar inequality also holds for a discretized circuit C: log rk(M_C) = O(ed(log s + log w + log n) log s).
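To make the rank measure in the bound concrete, here is a small sketch (not from the paper) that builds the communication matrix M_f of a 2n-variable Boolean function f(x, y) — the matrix whose (x, y) entry is f(x, y) — and computes its rank. The example function EQ_n (equality of two n-bit strings) is a standard hard case: its communication matrix is the 2^n × 2^n identity, so rk(M_f) = 2^n and log rk(M_f) = n, which is the kind of function the paper's inequality rules out for small size, depth, energy, and weight.

```python
# Sketch illustrating the communication-matrix rank measure.
# The function names here (communication_matrix, eq) are illustrative,
# not taken from the paper.
import numpy as np
from itertools import product

def communication_matrix(f, n):
    """Return the 2^n x 2^n matrix M with M[x, y] = f(x, y),
    where x and y range over all n-bit tuples."""
    inputs = list(product([0, 1], repeat=n))
    return np.array([[f(x, y) for y in inputs] for x in inputs])

def eq(x, y):
    """n-bit equality: EQ(x, y) = 1 iff x == y."""
    return int(x == y)

n = 4
M = communication_matrix(eq, n)
rank = np.linalg.matrix_rank(M)
print(rank)  # 16, i.e. 2^n: the identity matrix has full rank
```

For EQ_n, log rk(M_C) = n grows linearly in n, so any threshold circuit family computing it with, say, polylogarithmic depth, energy, and weight would violate the inequality unless its size s is exponential in n (up to the stated factors).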