
Adversarial synapses: Hebbian/anti-Hebbian learning optimizes min-max objectives

Abstract

A promising approach towards understanding neural networks is to view them as implementations of online algorithms that optimize principled objectives. Existing neural algorithms capturing both neural activity dynamics and synaptic weight updates perform the same operation, either minimization or maximization of the objective, with respect to every variable. Here, we derive neural networks from principled min-max objectives: minimizing with respect to neural activity and feedforward synaptic weights, and maximizing with respect to lateral synaptic weights. The min-max objectives are, in turn, obtained by applying the Hubbard-Stratonovich (HS) transform to similarity matching objectives. The resulting networks perform dimensionality reduction of the input data using only biologically plausible local learning rules. The min-max nature of the objective is reflected in the antagonism between Hebbian feedforward and anti-Hebbian lateral learning in the derived networks. We prove that the only stable fixed points of the network dynamics correspond to principal subspace projection (PSP) or principal subspace whitening (PSW). Finally, from the min-max objectives we derive novel formulations of dimensionality reduction using fractional matrix exponents.
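To make the construction concrete, here is a sketch of the step the abstract describes, using the standard similarity matching normalization (our assumption; the paper's conventions may differ). The quartic terms of the PSP objective are traded for auxiliary matrix variables, the feedforward weights W and the lateral weights M, turning a single minimization over the outputs Y into a min-max problem:

```latex
% PSP similarity matching objective and its min-max form after introducing
% auxiliary variables W (feedforward) and M (lateral); normalization is an
% assumption based on the similarity matching literature.
\min_{\mathbf{Y}} \frac{1}{T^{2}}
  \bigl\lVert \mathbf{X}^{\top}\mathbf{X} - \mathbf{Y}^{\top}\mathbf{Y} \bigr\rVert_{F}^{2}
\;\Longrightarrow\;
\min_{\mathbf{W}} \max_{\mathbf{M}} \min_{\mathbf{Y}}
  \Bigl[ 2\operatorname{Tr}\bigl(\mathbf{W}^{\top}\mathbf{W}\bigr)
       - \operatorname{Tr}\bigl(\mathbf{M}^{\top}\mathbf{M}\bigr)
       - \tfrac{4}{T}\operatorname{Tr}\bigl(\mathbf{X}^{\top}\mathbf{W}^{\top}\mathbf{Y}\bigr)
       + \tfrac{2}{T}\operatorname{Tr}\bigl(\mathbf{Y}^{\top}\mathbf{M}\mathbf{Y}\bigr) \Bigr]
```

The inner optima are W = Y X^T / T and M = Y Y^T / T, i.e. an input-output correlation and an output autocorrelation, which is what makes the derived weight updates local. A minimal numerical sketch of the resulting online algorithm follows; the learning rate, timescale ratio, synthetic data model, and variable names are all illustrative assumptions, not the paper's reference implementation:

```python
# Sketch of an online Hebbian/anti-Hebbian PSP network derived from the
# min-max objective above. Hyperparameters and data are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, k, T = 10, 3, 20000         # input dim, output dim, number of samples
eta, tau = 1e-3, 0.5           # learning rate and W/M timescale ratio (assumed)

# Synthetic inputs whose covariance has a dominant k-dimensional subspace.
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
scales = np.concatenate([[4.0, 3.0, 2.0], 0.2 * np.ones(n - k)])
X = U @ (scales[:, None] * rng.standard_normal((n, T)))

W = rng.standard_normal((k, n)) / np.sqrt(n)   # feedforward weights (Hebbian)
M = np.eye(k)                                  # lateral weights (anti-Hebbian)

for t in range(T):
    x = X[:, t]
    # Fast neural dynamics dy/dt = W x - M y settle at y = M^{-1} W x.
    y = np.linalg.solve(M, W @ x)
    # Hebbian feedforward update: the minimization player; W tracks <y x^T>.
    W += 2 * eta * (np.outer(y, x) - W)
    # Anti-Hebbian lateral update: the maximization player; M tracks <y y^T>,
    # and its growth penalizes correlated outputs.
    M += (eta / tau) * (np.outer(y, y) - M)

# At a PSP fixed point, the filter F = M^{-1} W spans the top-k subspace.
F = np.linalg.solve(M, W)
P_hat = F.T @ np.linalg.pinv(F @ F.T) @ F      # projector onto learned subspace
P_true = U[:, :k] @ U[:, :k].T                 # projector onto true subspace
print("subspace alignment error:", np.linalg.norm(P_hat - P_true))
```

Note the adversarial structure: W performs descent and M performs ascent on the same objective, so the lateral weights act as an opposing player that forces the outputs to decorrelate.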
