
Scaling Equilibrium Propagation to Deeper Neural Network Architectures

Main: 5 pages
Figures: 4
Bibliography: 2 pages
Abstract

Equilibrium propagation has been proposed as a biologically plausible alternative to the backpropagation algorithm. The local nature of its gradient computations, combined with the use of convergent RNNs to reach equilibrium states, makes the approach well suited for implementation on neuromorphic hardware. However, previous studies of equilibrium propagation have been restricted to networks containing only dense layers, or to relatively small architectures with a few convolutional layers followed by a final dense layer. These networks exhibit a significant accuracy gap relative to similarly sized feedforward networks trained with backpropagation. In this work, we introduce the Hopfield-Resnet architecture, which incorporates residual (skip) connections into Hopfield networks with a clipped ReLU activation function. The proposed architectural enhancements enable the training of networks with nearly twice the number of layers reported in prior work. For example, Hopfield-Resnet13 achieves 93.92% accuracy on CIFAR-10, which is ≈3.5% higher than the previous best result and comparable to that of Resnet13 trained with backpropagation.
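To make the two architectural ingredients concrete, below is a minimal sketch of a clipped-ReLU nonlinearity and a toy Hopfield-style energy with one skip coupling. This is an illustration under assumed names and shapes (W01, W12, W23, W13 and the layer states are hypothetical), not the paper's implementation.

```python
import numpy as np

def clipped_relu(x, cap=1.0):
    # Clipped ReLU: min(max(x, 0), cap). Bounding the activation keeps the
    # convergent (Hopfield) dynamics well-behaved during relaxation.
    return np.clip(x, 0.0, cap)

def energy(x, h1, h2, h3, W01, W12, W23, W13):
    # Toy 3-hidden-layer Hopfield-style energy. W01, W12, W23 couple
    # adjacent layers; W13 is a residual/skip coupling from h1 to h3.
    # Shapes (assumed): x:(n0,), h1:(n1,), h2:(n2,), h3:(n3,),
    # W01:(n0,n1), W12:(n1,n2), W23:(n2,n3), W13:(n1,n3).
    rho = clipped_relu
    e = 0.5 * (np.sum(h1**2) + np.sum(h2**2) + np.sum(h3**2))
    e -= rho(x)  @ W01 @ rho(h1)
    e -= rho(h1) @ W12 @ rho(h2)
    e -= rho(h2) @ W23 @ rho(h3)
    e -= rho(h1) @ W13 @ rho(h3)  # the residual (skip) term
    return e
```

In equilibrium propagation, the free phase relaxes the hidden states toward a minimum of such an energy; the skip term gives distant layers a direct coupling, which is the intuition behind training deeper stacks than adjacent-layer Hopfield networks allow.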
