ResearchTrend.AI

arXiv:2310.18564
A General Framework for Robust G-Invariance in G-Equivariant Networks

28 October 2023
Sophia Sanborn
Nina Miolane
    AAML
    OOD
Abstract

We introduce a general method for achieving robust group-invariance in group-equivariant convolutional neural networks (G-CNNs), which we call the G-triple-correlation (G-TC) layer. The approach leverages the theory of the triple correlation on groups, which is the unique lowest-degree polynomial invariant map that is also complete. Many commonly used invariant maps, such as the max, are incomplete: they remove both group and signal structure. A complete invariant, by contrast, removes only the variation due to the actions of the group while preserving all information about the structure of the signal. The completeness of the triple correlation endows the G-TC layer with strong robustness, which can be observed in its resistance to invariance-based adversarial attacks. In addition, we observe that it yields measurable improvements in classification accuracy over standard max G-pooling in G-CNN architectures. We provide a general and efficient implementation of the method for any discretized group, which requires only a table defining the group's product structure. We demonstrate the benefits of this method for G-CNNs defined on both commutative and non-commutative groups: SO(2), O(2), SO(3), and O(3) (discretized as the cyclic C8, dihedral D16, chiral octahedral O, and full octahedral O_h groups) acting on ℝ² and ℝ³, on both G-MNIST and G-ModelNet10 datasets.
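The abstract notes that the implementation requires only a table defining the group's product structure. As a minimal sketch (not the authors' code), assuming the standard definition of the triple correlation on a finite group, T_f(g1, g2) = Σ_g f(g) f(g·g1) f(g·g2), one can compute it from a Cayley table alone; the function names here are illustrative:

```python
import numpy as np

def cayley_table_cyclic(n):
    # Product table for the cyclic group C_n: entry [i, j] = index of g_i * g_j.
    idx = np.arange(n)
    return (idx[:, None] + idx[None, :]) % n

def triple_correlation(f, table):
    # f: real-valued signal on the group, shape (|G|,).
    # table: |G| x |G| Cayley table of group-element indices.
    # Returns T[g1, g2] = sum over g of f(g) * f(g*g1) * f(g*g2).
    n = len(f)
    T = np.zeros((n, n))
    for g1 in range(n):
        for g2 in range(n):
            T[g1, g2] = np.sum(f * f[table[:, g1]] * f[table[:, g2]])
    return T

# Invariance check: a signal and its group-translated copy share the same TC,
# since h -> g0*h is a bijection of the group and the sum is unchanged.
table = cayley_table_cyclic(8)  # C_8, the discretization of SO(2) used in the paper
f = np.random.default_rng(0).normal(size=8)
f_shifted = f[table[3]]  # f'(h) = f(g_3 * h)
assert np.allclose(triple_correlation(f, table), triple_correlation(f_shifted, table))
```

Because only the product table enters the computation, the same routine applies unchanged to non-commutative groups such as D16 or O_h once their Cayley tables are supplied.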
