G-equivariant convolutional neural networks (GCNNs) are a geometric deep learning model for data defined on a homogeneous G-space M. GCNNs are designed to respect the global symmetry in M, thereby facilitating learning. In this paper, we analyze GCNNs on homogeneous spaces M = G/K in the case of unimodular Lie groups G and compact subgroups K ≤ G. We demonstrate that homogeneous vector bundles are the natural setting for GCNNs. We also use reproducing kernel Hilbert spaces to obtain a precise criterion for expressing G-equivariant layers as convolutional layers. This criterion is then rephrased as a bandwidth criterion, leading to even stronger results for some groups.
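The central claim, that equivariant layers can be expressed as convolutions, can be illustrated in the simplest homogeneous setting: signals on a finite cyclic group. The sketch below is my own minimal illustration (not the paper's construction) of a group convolution on Z_n and a numerical check that it commutes with the group action, i.e., that the layer is equivariant.

```python
import numpy as np

def group_conv(f, k):
    """Group convolution on the cyclic group Z_n:
    (f * k)(g) = sum_h f(h) k(g - h), with arithmetic mod n."""
    n = len(f)
    return np.array([
        sum(f[h] * k[(g - h) % n] for h in range(n))
        for g in range(n)
    ])

def translate(f, t):
    """Action of the group element t on a signal: (t . f)(g) = f(g - t)."""
    return np.roll(f, t)

rng = np.random.default_rng(0)
f = rng.standard_normal(8)  # input signal on Z_8
k = rng.standard_normal(8)  # convolution kernel on Z_8

# Equivariance: convolving a translated signal equals translating the output.
lhs = group_conv(translate(f, 3), k)
rhs = translate(group_conv(f, k), 3)
assert np.allclose(lhs, rhs)
```

Here Z_n plays the role of both G and M (with trivial K); the paper's criterion concerns when this picture extends to general unimodular Lie groups and homogeneous vector bundles over G/K.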