
Capsule Networks Do Not Need to Model Everything

Pattern Recognition (Pattern Recogn.), 2022
Main: 33 pages · Bibliography: 1 page · Appendix: 22 pages · 23 figures · 13 tables
Abstract

Capsule networks are biologically inspired neural networks that group neurons into vectors called capsules, each explicitly representing an object or one of its parts. A routing mechanism connects capsules in consecutive layers, forming a hierarchical structure between parts and objects known as a parse tree. Capsule networks often attempt to model every element in an image, requiring large networks to handle complexities such as intricate backgrounds or irrelevant objects; this comprehensive modeling inflates parameter counts and computational cost. Our goal is to let capsule networks focus only on the object of interest, reducing the number of parse trees. We accomplish this with REM (Routing Entropy Minimization), a technique that minimizes the entropy of the parse tree-like structure. REM drives the distribution of the model parameters towards low-entropy configurations through a pruning mechanism, significantly reducing the generation of intra-class parse trees. This empowers capsules to learn more stable and succinct representations with fewer parameters and negligible performance loss.
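The abstract names two ingredients, an entropy term on the routing distribution and a pruning mechanism, without giving details. The sketch below is a minimal PyTorch illustration under assumptions not stated in the abstract: routing (coupling) coefficients `c` of shape `[batch, in_caps, out_caps]` that are normalized over parent capsules, and plain magnitude pruning. The function names `routing_entropy` and `magnitude_prune_` are hypothetical, not taken from the paper.

```python
import torch

def routing_entropy(c: torch.Tensor, eps: float = 1e-12) -> torch.Tensor:
    """Mean Shannon entropy of routing (coupling) coefficients.

    c: assumed shape [batch, in_caps, out_caps], where c[b, i, :]
    is a distribution over parent capsules (non-negative, sums to 1).
    """
    h = -(c * (c + eps).log()).sum(dim=-1)  # entropy per child capsule
    return h.mean()

def magnitude_prune_(weight: torch.Tensor, amount: float = 0.5) -> None:
    """In-place magnitude pruning: zero the smallest-|w| fraction of weights."""
    k = int(amount * weight.numel())
    if k == 0:
        return
    threshold = weight.abs().flatten().kthvalue(k).values
    weight.data[weight.abs() <= threshold] = 0.0
```

One plausible way to combine the two pieces is to add `lambda_rem * routing_entropy(c)` to the classification loss and periodically call `magnitude_prune_` on the capsule-layer weights; the actual REM objective, pruning criterion, and schedule are specified in the paper itself.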
