
Implicit Bias and Invariance: How Hopfield Networks Efficiently Learn Graph Orbits

Michael Murray
Tenzin Chan
Kedar Karhadker
Christopher J. Hillar
Main: 8 pages
Figures: 12
Bibliography: 3 pages
Appendix: 20 pages
Abstract

Many learning problems involve symmetries, and while invariance can be built into neural architectures, it can also emerge implicitly when training on group-structured data. We study this phenomenon in classical Hopfield networks and show that they can infer the full isomorphism class of a graph from a small random sample. Our results reveal that: (i) graph isomorphism classes can be represented within a three-dimensional invariant subspace, (ii) minimizing energy flow (MEF) by gradient descent has an implicit bias toward norm-efficient solutions, which underpins a polynomial sample complexity bound for learning isomorphism classes, and (iii) across multiple learning rules, parameters converge toward the invariant subspace as sample sizes grow. Together, these findings highlight a unifying mechanism for generalization in Hopfield networks: a bias toward norm efficiency in learning drives the emergence of approximate invariance under group-structured data.
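To make the setting concrete, below is a minimal NumPy sketch of the objects in play: the classical Hopfield energy and gradient descent on an energy-flow-style training loss. The specific `flow_loss` form, the random toy patterns, and the finite-difference gradient are illustrative assumptions for this sketch; the paper's exact MEF objective and its graph-orbit sampling scheme are defined in the full text.

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(W, b, x):
    # Classical Hopfield energy: E(x) = -1/2 x^T W x - b^T x, for x in {-1,+1}^n.
    return -0.5 * x @ W @ x - b @ x

def flow_loss(W, b, X):
    # Energy-flow-style objective (an illustrative assumption, not
    # necessarily the paper's exact MEF loss): penalize flow from each
    # training pattern to its one-bit-flip neighbors, so that minimizing
    # it deepens the energy wells around the stored patterns.
    total = 0.0
    for x in X:
        for i in range(len(x)):
            y = x.copy()
            y[i] *= -1  # flip one spin
            total += np.exp(0.5 * (energy(W, b, x) - energy(W, b, y)))
    return total

def grad_step(W, b, X, lr=1e-2, eps=1e-5):
    # One gradient-descent step on flow_loss, with the gradient taken by
    # central differences to keep the sketch short; an analytic gradient
    # is straightforward to derive.
    gW = np.zeros_like(W)
    for i in range(W.shape[0]):
        for j in range(W.shape[1]):
            Wp, Wm = W.copy(), W.copy()
            Wp[i, j] += eps
            Wm[i, j] -= eps
            gW[i, j] = (flow_loss(Wp, b, X) - flow_loss(Wm, b, X)) / (2 * eps)
    W = W - lr * (gW + gW.T) / 2  # keep the coupling matrix symmetric
    np.fill_diagonal(W, 0.0)      # zero self-couplings, as is standard
    return W

# Toy run: three random +/-1 patterns stand in for graph samples (in the
# graph setting, each spin would encode one edge slot of an adjacency matrix).
n = 6
X = rng.choice([-1.0, 1.0], size=(3, n))
W, b = np.zeros((n, n)), np.zeros(n)
for _ in range(100):
    W = grad_step(W, b, X)
print("flow loss after training:", flow_loss(W, b, X))
```

Note that the abstract's implicit-bias claim concerns which of the many low-loss weight matrices such a procedure selects, namely norm-efficient ones, not merely whether the training loss decreases.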
