
Exponential Lower Bounds for Threshold Circuits of Sub-Linear Depth and Energy

International Symposium on Mathematical Foundations of Computer Science (MFCS), 2021
Abstract

In this paper, we investigate the computational power of threshold circuits and other theoretical models of neural networks in terms of the following four complexity measures: size (the number of gates), depth, weight, and energy. Here the energy complexity of a circuit measures the sparsity of its computation, and is defined as the maximum number of gates outputting non-zero values, taken over all input assignments. As our main result, we prove that any threshold circuit $C$ of size $s$, depth $d$, energy $e$ and weight $w$ satisfies $\log(rk(M_C)) \le ed(\log s + \log w + \log n)$, where $rk(M_C)$ is the rank of the communication matrix $M_C$ of the $2n$-variable Boolean function that $C$ computes. Thus, such a threshold circuit $C$ can compute only a Boolean function whose communication matrix has rank bounded by a product of logarithmic factors of $s$ and $w$ and linear factors of $d$ and $e$. This implies an exponential lower bound on the size of threshold circuits, even those of sub-linear depth, if the energy and weight are sufficiently small. For other models of neural networks, such as discretized ReLU circuits and discretized sigmoid circuits, we prove that a similar inequality also holds for a discretized circuit $C$: $\log(rk(M_C)) = O(ed(\log s + \log w + \log n)^3)$.
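As a minimal illustration of the energy measure defined above (a toy sketch, not code from the paper): for a small feed-forward threshold circuit we can enumerate all input assignments and count, for each, how many gates output a non-zero value; the energy is the maximum such count. The circuit encoding below (each gate as a weight vector and a threshold over the inputs and earlier gate outputs) is a hypothetical representation chosen for this example.

```python
from itertools import product

def gate_output(weights, threshold, values):
    # A threshold gate fires (outputs 1) iff its weighted sum
    # of incoming values reaches the threshold.
    return 1 if sum(w * v for w, v in zip(weights, values)) >= threshold else 0

def energy(n_inputs, gates):
    """Energy complexity: max over all 0/1 input assignments of the
    number of gates outputting a non-zero value.

    gates: list of (weights, threshold); gate i reads the n inputs
    followed by the outputs of gates 0..i-1 (weights may be shorter
    than the full value list, in which case trailing wires are unused).
    """
    best = 0
    for x in product([0, 1], repeat=n_inputs):
        values = list(x)
        fired = 0
        for weights, threshold in gates:
            out = gate_output(weights, threshold, values)
            fired += out
            values.append(out)
        best = max(best, fired)
    return best

# Example: a depth-2 circuit computing XOR of two inputs.
circuit = [
    ([1, 1], 2),         # g0 = x1 AND x2
    ([1, 1], 1),         # g1 = x1 OR x2
    ([0, 0, -2, 1], 1),  # g2 = g1 AND NOT g0  (top gate, = x1 XOR x2)
]
print(energy(2, circuit))  # -> 2: at most two gates fire on any input
```

On input (1,1) gates g0 and g1 fire but g2 does not, and on (0,1) or (1,0) gates g1 and g2 fire, so the energy of this circuit is 2 even though it has 3 gates.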
