On Non-Linear operators for Geometric Deep Learning

Abstract

This work studies operators mapping vector and scalar fields defined over a manifold $\mathcal{M}$ that commute with its group of diffeomorphisms $\mathrm{Diff}(\mathcal{M})$. We prove that in the case of scalar fields $L^p_\omega(\mathcal{M},\mathbb{R})$, these operators correspond to point-wise non-linearities, recovering and extending known results on $\mathbb{R}^d$. In the context of Neural Networks defined over $\mathcal{M}$, this indicates that point-wise non-linear operators are the only universal family that commutes with any group of symmetries, and justifies their systematic use in combination with dedicated linear operators commuting with specific symmetries. In the case of vector fields $L^p_\omega(\mathcal{M},T\mathcal{M})$, we show that these operators reduce to scalar multiplication. This indicates that $\mathrm{Diff}(\mathcal{M})$ is too rich, and that there is no universal class of non-linear operators to motivate the design of Neural Networks over the symmetries of $\mathcal{M}$.
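The commutation property for scalar fields can be illustrated numerically. Below is a minimal sketch (not from the paper) in which the field is discretized on a finite set of sample points and a permutation of those points serves as a hypothetical stand-in for a diffeomorphism acting by precomposition $f \mapsto f \circ \varphi$; a point-wise non-linearity $\sigma$ then satisfies $\sigma(f \circ \varphi) = (\sigma \circ f) \circ \varphi$:

```python
import numpy as np

# Hypothetical discretization: a scalar field sampled at n points of M.
rng = np.random.default_rng(0)
n = 8
f = rng.standard_normal(n)

# Stand-in for a diffeomorphism: a permutation of the sample points,
# acting on fields by precomposition f -> f o phi.
phi = rng.permutation(n)

sigma = np.tanh  # any point-wise non-linearity

# Point-wise operators commute with the action:
# sigma(f o phi) == (sigma o f) o phi
lhs = sigma(f[phi])
rhs = sigma(f)[phi]
assert np.allclose(lhs, rhs)
```

A non-point-wise operator (e.g. a convolution with a non-trivial kernel) would generically fail this identity for an arbitrary permutation, which reflects why commuting with all of $\mathrm{Diff}(\mathcal{M})$ is so restrictive.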
