Exponential Separations in Symmetric Neural Networks
Aaron Zweig
Joan Bruna

Abstract
In this work we demonstrate a novel separation between symmetric neural network architectures. Specifically, we consider the Relational Network~\parencite{santoro2017simple} architecture as a natural generalization of the DeepSets~\parencite{zaheer2017deep} architecture, and study their representational gap. Under the restriction to analytic activation functions, we construct a symmetric function acting on sets of size $N$ with elements in dimension $D$, which can be efficiently approximated by the former architecture, but provably requires width exponential in $N$ and $D$ for the latter.
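For reference, a sketch of the two parameterizations, following their standard definitions in \parencite{zaheer2017deep} and \parencite{santoro2017simple} (the symbols $\rho$ and $\phi$ and the input notation are ours, chosen for illustration): DeepSets aggregates a shared embedding of each element, while a Relational Network aggregates over all pairs of elements before the final readout.
\[
\text{DeepSets:}\quad f(x_1,\dots,x_N) = \rho\!\left(\sum_{i=1}^{N} \phi(x_i)\right),
\qquad
\text{Relational Network:}\quad f(x_1,\dots,x_N) = \rho\!\left(\sum_{i,j=1}^{N} \phi(x_i, x_j)\right).
\]
Both define permutation-invariant functions of the set $\{x_1,\dots,x_N\} \subset \mathbb{R}^D$; the separation concerns the width of $\phi$ and $\rho$ required to approximate the constructed target.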