
On representation power of neural network-based graph embedding and beyond

Abstract

We consider the representation power of siamese-style similarity functions used in neural network-based graph embedding. The inner product similarity (IPS), with feature vectors computed via neural networks, is commonly used to represent the strength of association between two nodes. However, little work has been done on the representation capability of IPS. A very recent work shed light on the nature of IPS, revealing that IPS can approximate any positive definite (PD) similarity. However, a simple example demonstrates the fundamental limitation of IPS in approximating non-PD similarities. We then propose a novel model named Shifted IPS (SIPS), which approximates any conditionally PD (CPD) similarity arbitrarily well. CPD is a generalization of PD with many examples, such as the negative Poincaré distance and the negative Wasserstein distance, so SIPS has the potential to significantly improve the applicability of graph embedding without taking great care in configuring the similarity function. Our numerical experiments demonstrate the superiority of SIPS over IPS. In theory, we further extend SIPS beyond CPD by considering the inner product in Minkowski space, so that it approximates more general similarities.
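To make the contrast between the two similarity models concrete, the following is a minimal sketch, not taken from the paper: IPS scores a node pair by the inner product of their neural-network feature vectors, while SIPS adds learned per-node shift (bias) terms on top of that inner product, which is what lets it reach CPD similarities. The function and variable names here are illustrative assumptions.

```python
import numpy as np

def ips(y_i, y_j):
    """Inner product similarity: <y_i, y_j>."""
    return np.dot(y_i, y_j)

def sips(y_i, y_j, u_i, u_j):
    """Shifted inner product similarity: <y_i, y_j> + u_i + u_j,
    where u_i, u_j are learned scalar shift terms for each node."""
    return np.dot(y_i, y_j) + u_i + u_j

# Illustrative usage: y_i, y_j would be the outputs of a shared
# (siamese) neural network applied to the two nodes' attributes;
# u_i, u_j would be trained jointly with the network parameters.
y_i = np.array([0.2, -1.0, 0.5])
y_j = np.array([0.4, 0.3, -0.1])
print(ips(y_i, y_j))           # plain IPS score
print(sips(y_i, y_j, 0.7, -0.2))  # SIPS score with shift terms
```

In this sketch, the only structural change from IPS to SIPS is the additive shift pair, which is cheap to train but, per the abstract, extends the class of approximable similarities from PD to CPD.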
