Circuit Complexity of Visual Search
We study the computational hardness of feature and conjunction search through the lens of circuit complexity. Let $x_1, \dots, x_n$ (resp., $y_1, \dots, y_n$) be Boolean variables, each of which takes the value one if and only if a neuron at place $i$ detects a feature (resp., another feature). We then formulate feature and conjunction search as the Boolean functions $FS_n(x) = \bigvee_{i=1}^n x_i$ and $CS_n(x, y) = \bigvee_{i=1}^n (x_i \wedge y_i)$, respectively. We employ a threshold circuit or a discretized circuit (such as a sigmoid circuit or a ReLU circuit with discretization) as our model of neural networks, and consider the following four computational resources: [i] the number of neurons (size), [ii] the number of levels (depth), [iii] the number of active neurons outputting non-zero values (energy), and [iv] synaptic weight resolution (weight). We first prove that any threshold circuit $C$ of size $s$, depth $d$, energy $e$ and weight $w$ satisfies $\log \mathrm{rk}(M_C) \le ed(\log s + \log w + \log n)$, where $\mathrm{rk}(M_C)$ is the rank of the communication matrix $M_C$ of the $2n$-variable Boolean function that $C$ computes. Since $M_{CS_n}$ has rank $2^n - 1$, we have $n - 1 \le ed(\log s + \log w + \log n)$. Thus, an exponential lower bound on the size of even sublinear-depth threshold circuits holds if the energy and weight are sufficiently small. Since $FS_n$ is computable by a single threshold gate, with size, depth, energy and weight constant independently of $n$, our result suggests that the computational capacities required for feature and conjunction search are different. We also show that the inequality is tight up to a constant factor if $ed = o(n / \log n)$. We next show that a similar inequality holds for any discretized circuit. Thus, if we regard the number of gates outputting non-zero values as a measure of sparse activity, our results suggest that larger depth helps neural networks acquire sparse activity.
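As a concrete illustration of the formulation, the sketch below (a minimal illustration, not code from the paper; the names `fs`, `cs`, and `threshold_gate` are ours) encodes feature search as an OR over the $x_i$ and conjunction search as an OR over the conjunctions $x_i \wedge y_i$, and checks that a single threshold gate — size 1, depth 1, energy at most 1, and unit weights — already computes feature search:

```python
from itertools import product

def fs(x):
    # Feature search FS_n: the target feature is present at some place.
    return int(any(x))

def cs(x, y):
    # Conjunction search CS_n: some place i exhibits both features.
    return int(any(xi and yi for xi, yi in zip(x, y)))

def threshold_gate(weights, threshold, inputs):
    # A threshold gate fires iff its weighted input sum reaches the threshold.
    return int(sum(w * v for w, v in zip(weights, inputs)) >= threshold)

n = 4
# FS_n is one OR gate: all weights 1, threshold 1. Its size, depth, energy,
# and weight are constants that do not grow with n.
for x in product([0, 1], repeat=n):
    assert threshold_gate([1] * n, 1, x) == fs(x)
print("a single threshold gate computes FS_n")
```

The exhaustive check over all $2^n$ inputs makes the contrast in the abstract concrete: feature search needs only constant resources, while the rank bound rules this out for conjunction search.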
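The rank bound on $M_{CS_n}$ can also be verified directly for small $n$. The sketch below (again our own illustration, under the assumption that the communication matrix is indexed by the $2^n$ assignments to $x$ on rows and to $y$ on columns) builds $M_{CS_n}$ for $n = 3$ and computes its rank over the rationals by exact Gaussian elimination, which comes out to $2^n - 1$ (the all-zero row for $x = 0^n$ costs one dimension):

```python
from fractions import Fraction
from itertools import product

def cs(x, y):
    # Conjunction search CS_n: some place i exhibits both features.
    return int(any(xi and yi for xi, yi in zip(x, y)))

def rank(mat):
    # Exact real rank via row reduction over the rationals (no float error).
    m = [[Fraction(v) for v in row] for row in mat]
    r = 0
    for col in range(len(m[0])):
        pivot = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

n = 3
inputs = list(product([0, 1], repeat=n))
# Communication matrix of CS_n: rows indexed by x, columns by y.
M = [[cs(x, y) for y in inputs] for x in inputs]
print(rank(M))  # prints 7, i.e. 2**n - 1
```

A near-full rank of $2^n - 1$ is what drives the lower bound: plugging it into $\log \mathrm{rk}(M_C) \le ed(\log s + \log w + \log n)$ forces the size $s$ to be exponential whenever $e$, $d$, and $w$ are small.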