Multimodal Representation Learning on Graphs
Artificial intelligence on graphs (graph AI) has achieved remarkable success in modeling complex systems, ranging from dynamical systems in biology to interacting particle systems in physics. Increasingly heterogeneous graph datasets call for multimodal graph AI algorithms that combine multiple inductive biases -- the sets of assumptions that algorithms use to predict outputs for inputs they have not yet encountered. Learning on multimodal graph datasets presents fundamental challenges because inductive biases can vary by data modality and graphs might not be explicitly given in the input. To address these challenges, multimodal graph AI methods combine multiple modalities while leveraging cross-modal dependencies. Here, we survey 142 studies in graph AI and find that diverse datasets are increasingly combined using graphs and fed into sophisticated multimodal models. These models stratify into image-, language-, and knowledge-grounded multimodal graph AI methods. Using this categorization of state-of-the-art methods, we put forward an algorithmic blueprint for multimodal graph AI, which we use to study existing methods and standardize the design of future methods for highly complex systems.