An important problem in signal processing and deep learning is to achieve \textit{invariance} to nuisance factors that are not relevant for the task. Since many of these factors are describable as the action of a group $G$ (e.g. rotations, translations, scalings), we want methods to be $G$-invariant. The $G$-Bispectrum extracts every characteristic of a given signal up to group action: for example, the shape of an object in an image, but not its orientation. Consequently, the $G$-Bispectrum has been incorporated into deep neural network architectures as a computational primitive for $G$-invariance\textemdash akin to a pooling mechanism, but with greater selectivity and robustness. However, the computational cost of the $G$-Bispectrum ($\mathcal{O}(|G|^2)$, with $|G|$ the size of the group) has limited its widespread adoption. Here, we show that the $G$-Bispectrum computation contains redundancies that can be reduced into a \textit{selective $G$-Bispectrum} with $\mathcal{O}(|G|)$ complexity. We prove desirable mathematical properties of the selective $G$-Bispectrum and demonstrate how its integration in neural networks enhances accuracy and robustness compared to traditional approaches, while enjoying considerable speed-ups compared to the full $G$-Bispectrum.
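As a concrete illustration (a special case, not the paper's general construction for arbitrary compact groups), consider $G = \mathbb{Z}/n\mathbb{Z}$ acting on a signal $f$ by cyclic translation. Writing $\hat{f}$ for the discrete Fourier transform of $f$, the classical bispectrum is
\begin{equation*}
B_f(k_1, k_2) \;=\; \hat{f}(k_1)\,\hat{f}(k_2)\,\overline{\hat{f}(k_1 + k_2)}, \qquad k_1, k_2 \in \mathbb{Z}/n\mathbb{Z},
\end{equation*}
which is invariant to translations of $f$: a shift by $t$ multiplies $\hat{f}(k)$ by $e^{-2\pi i k t/n}$, and these phases cancel in the triple product. Evaluating all $|G|^2 = n^2$ pairs $(k_1, k_2)$ gives the quadratic cost mentioned above, whereas (under genericity assumptions such as nonvanishing Fourier coefficients) a subset of only $\mathcal{O}(|G|)$ pairs, e.g. those of the form $(1, k)$, already determines $\hat{f}(k+1)$ recursively from $\hat{f}(k)$ and hence $f$ up to translation\textemdash the intuition behind the selective $G$-Bispectrum.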