Constrained Parameter Inference as a Principle for Learning
Learning in biological and artificial neural networks is often framed as a problem in which targeted error signals directly guide parameter updates in order to improve network behaviour. Backpropagation of error (BP) is an example of such an approach and has proven to be a highly successful application of stochastic gradient descent to deep neural networks. However, BP relies on transmitting gradient information directly to parameters and frames learning as two completely separate passes: a forward inference pass and a backward error pass. We propose constrained parameter inference (COPI) as a new principle for learning. Under COPI, parameters infer their own updates from locally available neuron activities. This estimation of network parameters is possible under two constraints: decorrelated neural inputs and top-down perturbations of neural states, so that credit is assigned to units rather than directly to parameters. The form of the top-down perturbation determines which credit-assignment method is used; when it is aligned with BP, the resulting update constitutes a mixture of the forward and backward passes. We show that COPI is not only more biologically plausible but also offers distinct advantages for fast learning compared with BP.
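To make the idea concrete, the following is a minimal numpy sketch of a COPI-style local update for a single linear layer. The decorrelation routine, the specific update rule, and all function names are illustrative assumptions made for this sketch, not the paper's exact formulation.

```python
# Hypothetical sketch of a COPI-style local update for one linear layer.
# The decorrelation step, update rule, and all names here are illustrative
# assumptions for this sketch, not the paper's exact formulation.
import numpy as np

rng = np.random.default_rng(0)

def decorrelate(X, eps=1e-5):
    """ZCA-whiten a batch so that input features are approximately
    decorrelated -- one of the constraints COPI places on layer inputs."""
    Xc = X - X.mean(axis=0, keepdims=True)
    cov = Xc.T @ Xc / len(Xc)
    U, S, _ = np.linalg.svd(cov)
    return Xc @ (U @ np.diag(1.0 / np.sqrt(S + eps)) @ U.T)

def copi_update(W, x, a_perturbed, lr=0.05, decay=1.0):
    """Local update: each weight moves toward the correlation between its
    (top-down perturbed) output-unit activity and its decorrelated input,
    balanced by a decay term. Credit arrives as a perturbed unit state,
    not as an explicit weight gradient."""
    dW = np.outer(a_perturbed, x) - decay * W  # only local quantities
    return W + lr * dW

# Toy usage: infer a linear map from decorrelated inputs to random targets.
X = decorrelate(rng.normal(size=(512, 8)))
T = rng.normal(size=(512, 4))
W = np.zeros((4, 8))

for x, t in zip(X, T):
    a = W @ x                        # forward (local) activity
    a_perturbed = a + 0.1 * (t - a)  # top-down perturbation of the unit state
    W = copi_update(W, x, a_perturbed)
```

In this toy setting the perturbation nudges unit states toward a known target; if the perturbation were instead derived from backpropagated errors, the same local rule would, as the abstract notes, mix information from the forward and backward passes.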