The Selective G-Bispectrum and its Inversion: Applications to G-Invariant Networks

10 July 2024
Simon Mataigne
Johan Mathe
Sophia Sanborn
Christopher Hillar
Nina Miolane
Abstract

An important problem in signal processing and deep learning is to achieve \textit{invariance} to nuisance factors not relevant for the task. Since many of these factors are describable as the action of a group $G$ (e.g. rotations, translations, scalings), we want methods to be $G$-invariant. The $G$-Bispectrum extracts every characteristic of a given signal up to group action: for example, the shape of an object in an image, but not its orientation. Consequently, the $G$-Bispectrum has been incorporated into deep neural network architectures as a computational primitive for $G$-invariance\textemdash akin to a pooling mechanism, but with greater selectivity and robustness. However, the computational cost of the $G$-Bispectrum ($\mathcal{O}(|G|^2)$, with $|G|$ the size of the group) has limited its widespread adoption. Here, we show that the $G$-Bispectrum computation contains redundancies that can be reduced into a \textit{selective $G$-Bispectrum} with $\mathcal{O}(|G|)$ complexity. We prove desirable mathematical properties of the selective $G$-Bispectrum and demonstrate how its integration in neural networks enhances accuracy and robustness compared to traditional approaches, while enjoying considerable speed-ups compared to the full $G$-Bispectrum.
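To make the complexity gap concrete, here is a minimal NumPy sketch for the simplest case $G = \mathbb{Z}_n$ (cyclic translations), where the $G$-Fourier transform is the ordinary DFT. This is an illustration under that assumption, not the authors' code: the full bispectrum $B[k_1, k_2] = F[k_1]\,F[k_2]\,\overline{F[k_1+k_2]}$ has $\mathcal{O}(|G|^2)$ coefficients, while keeping only a single row (here $k_2 = 1$, a hypothetical but representative selective choice) retains $\mathcal{O}(|G|)$ coefficients. Both are invariant to translations of the input.

```python
import numpy as np

def full_bispectrum(x):
    # Full G-Bispectrum for the cyclic group Z_n (translations):
    # B[k1, k2] = F[k1] * F[k2] * conj(F[k1 + k2]) -- O(|G|^2) coefficients.
    F = np.fft.fft(x)
    n = len(x)
    k = np.arange(n)
    return F[:, None] * F[None, :] * np.conj(F[(k[:, None] + k[None, :]) % n])

def selective_bispectrum(x):
    # Selective variant (illustrative choice): keep only the row k2 = 1,
    # B[k, 1] = F[k] * F[1] * conj(F[k + 1]) -- O(|G|) coefficients.
    F = np.fft.fft(x)
    n = len(x)
    k = np.arange(n)
    return F * F[1] * np.conj(F[(k + 1) % n])

# Translation invariance: a cyclic shift of the signal multiplies F[k] by
# a phase exp(-2*pi*i*k*s/n), and these phases cancel in both bispectra.
x = np.random.default_rng(0).normal(size=8)
x_shifted = np.roll(x, 3)
assert np.allclose(full_bispectrum(x), full_bispectrum(x_shifted))
assert np.allclose(selective_bispectrum(x), selective_bispectrum(x_shifted))
```

The phase cancellation in the last comment is what makes any bispectrum coefficient invariant; the selective version simply keeps a subset of coefficients that is still rich enough (for generic signals) to characterize the input up to translation.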

View on arXiv