On the Sheafification of Higher-Order Message Passing

Recent work in Topological Deep Learning (TDL) seeks to generalize graph learning's preeminent paradigm to more complex relational structures: simplicial complexes, cell complexes, hypergraphs, and combinations thereof. Many approaches to such higher-order message passing (HOMP) admit formulation in terms of nonlinear diffusion with the Hodge (combinatorial) Laplacian, a graded operator which carries an inductive bias that dimension-$k$ data features correlate with dimension-$k$ topological features encoded in the (singular) cohomology of the underlying domain. For $k=0$ this recovers the graph Laplacian and its well-studied homophily bias. In higher gradings, however, the Hodge Laplacian's bias is more opaque and potentially even degenerate. In this essay, we position sheaf theory as a natural and principled formalism for modifying the Hodge Laplacian's diffusion-mediated interface between local and global descriptors toward more expressive message passing. The sheaf Laplacian's inductive bias correlates dimension-$k$ data features with dimension-$k$ sheaf cohomology, a data-aware generalization of singular cohomology. We will contextualize and novelly extend prior theory on sheaf diffusion in graph learning ($k=0$) in such a light -- and explore how it fails to generalize to $k>0$ -- before developing novel theory and practice for the higher-order setting. Our exposition is accompanied by a self-contained introduction shepherding sheaves from the abstract to the applied.
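For orientation, here is a minimal sketch of the two operators the abstract invokes, in our own notation under standard conventions (not taken verbatim from the paper). With coboundary maps $\delta_k : C^k \to C^{k+1}$ on the underlying complex, the degree-$k$ Hodge Laplacian satisfies the Hodge theorem,

\[
  \Delta_k \;=\; \delta_k^{*}\,\delta_k \;+\; \delta_{k-1}\,\delta_{k-1}^{*},
  \qquad
  \ker \Delta_k \;\cong\; H^k(X;\mathbb{R}),
\]

so heat diffusion $\dot{x} = -\Delta_k x$ converges to the projection of the input onto the harmonic $k$-cochains, a space isomorphic to degree-$k$ cohomology; $\Delta_0$ is the familiar graph Laplacian. A cellular sheaf $\mathcal{F}$ on a graph $G$ attaches a stalk to each vertex and edge, together with restriction maps $\mathcal{F}_{v \trianglelefteq e}$ between them, yielding the sheaf coboundary and sheaf Laplacian

\[
  (\delta_{\mathcal{F}} x)_e \;=\; \mathcal{F}_{v \trianglelefteq e}\, x_v \;-\; \mathcal{F}_{u \trianglelefteq e}\, x_u
  \quad (e = u \to v),
  \qquad
  L_{\mathcal{F}} \;=\; \delta_{\mathcal{F}}^{\top}\, \delta_{\mathcal{F}},
  \qquad
  \ker L_{\mathcal{F}} \;\cong\; H^0(G;\mathcal{F}),
\]

which recovers $\Delta_0$ when every stalk is $\mathbb{R}$ and every restriction map is the identity.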