Learning Edge Representations via Low-Rank Asymmetric Projections
We propose a method for learning continuous-space vector representations of graphs that preserve directed edge information. Unlike previous works that utilize random walks to learn structure-preserving graph embeddings, we (1) explicitly model an edge as a function of node embeddings, which we jointly learn with the node embeddings, and (2) propose a novel objective, which we call the \emph{graph likelihood}, defined in terms of random walk statistics. Individually, both of these contributions improve the learned representations, especially when there are memory constraints on the total size of the embeddings. When combined, they enable us to significantly improve the state-of-the-art by learning more concise representations that better preserve the graph structure. We evaluate our method on a variety of link-prediction tasks, including social networks, collaboration networks, and protein interactions, showing that it learns representations with error reductions of up to 76% and 55% on directed and undirected graphs, respectively. In addition, we show that the representations learned by our method make more effective use of their allotted space -- on several datasets, they outperform all baseline methods while using \emph{16 times less} space to represent each node.
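To make the idea of an asymmetric edge function concrete, the following is a minimal sketch (not the paper's implementation) of scoring a directed edge with a low-rank, non-symmetric bilinear form over shared node embeddings. All names (`embed_dim`, `rank`, `score_edge`) and the random initialization are illustrative assumptions; in the actual method these parameters would be learned jointly.

```python
import numpy as np

# Illustrative sketch only: all shapes and names are assumptions, and the
# parameters here are random rather than learned.
rng = np.random.default_rng(0)
num_nodes, embed_dim, rank = 100, 16, 4

# One embedding per node, shared between source and destination roles.
Y = rng.normal(size=(num_nodes, embed_dim))

# Low-rank asymmetric projection: M = L @ R has rank <= `rank`, and in
# general M != M.T, so the edge score depends on edge direction.
L = rng.normal(size=(embed_dim, rank))
R = rng.normal(size=(rank, embed_dim))

def score_edge(u, v):
    """Probability-like score for a directed edge u -> v."""
    logit = Y[u] @ L @ R @ Y[v]
    return 1.0 / (1.0 + np.exp(-logit))  # sigmoid squashes to (0, 1)

# Direction matters: score_edge(u, v) and score_edge(v, u) generally differ.
s_uv = score_edge(3, 7)
s_vu = score_edge(7, 3)
```

The rank of the projection controls the extra parameter cost of modeling direction, which is why the representation can stay concise.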