Universal Self-Attention Network for Graph Classification

The Web Conference (WWW), 2019
Abstract

Existing graph neural network-based models have predominantly adopted a supervised setting for graph classification, and they often share conventional limitations in exploiting potential dependencies among nodes. To address this, we present U2GNN -- a novel embedding model that leverages the strength of the transformer self-attention network -- to learn low-dimensional embeddings of graphs. In particular, given an input graph, U2GNN applies a self-attention mechanism followed by a recurrent transition to update the vector representation of each node from its neighbors. U2GNN thus overcomes the limitations of existing models and produces plausible node embeddings, whose sum serves as the final embedding of the whole graph. Experimental results in both supervised and unsupervised settings show that U2GNN achieves new state-of-the-art performance on a range of well-known benchmark datasets for the graph classification task. To the best of our knowledge, this is the first work to train a GNN-based model in an unsupervised setting to improve graph classification performance.
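
To make the described aggregation step concrete, below is a minimal PyTorch sketch of the node-update scheme the abstract outlines: each node attends over itself and its neighbors with multi-head self-attention, a recurrent cell then updates the node state, and the graph embedding is the sum of node embeddings. This is an illustrative assumption-laden sketch, not the authors' released code; the layer names, the use of a GRU cell as the "recurrent transition", and the fixed-size neighbor sampling are all hypothetical choices.

```python
import torch
import torch.nn as nn

class NeighborSelfAttentionLayer(nn.Module):
    """One U2GNN-style aggregation step (illustrative sketch, not the paper's code):
    self-attention over a node plus its sampled neighbors, followed by a
    recurrent transition (assumed here to be a GRU cell) that updates the node state."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.gru = nn.GRUCell(dim, dim)

    def forward(self, h: torch.Tensor, neighbor_idx: torch.Tensor) -> torch.Tensor:
        # h: (num_nodes, dim) current node embeddings
        # neighbor_idx: (num_nodes, k) indices of k sampled neighbors per node
        neigh = h[neighbor_idx]                        # (num_nodes, k, dim)
        query = h.unsqueeze(1)                         # each node queries ...
        context = torch.cat([query, neigh], dim=1)     # ... itself and its neighbors
        attended, _ = self.attn(query, context, context)  # (num_nodes, 1, dim)
        # Recurrent transition: attended vector as input, previous state as hidden.
        return self.gru(attended.squeeze(1), h)

def graph_embedding(h: torch.Tensor) -> torch.Tensor:
    # Per the abstract, the final graph embedding is the sum of node embeddings.
    return h.sum(dim=0)

# Usage on a toy 5-node graph with 3 sampled neighbors per node:
layer = NeighborSelfAttentionLayer(dim=16)
h = torch.randn(5, 16)
neighbor_idx = torch.randint(0, 5, (5, 3))
h = layer(h, neighbor_idx)          # one aggregation step
g = graph_embedding(h)              # (16,) whole-graph embedding
```

Stacking several such steps would deepen each node's receptive field before the sum pooling; the actual U2GNN architecture and hyperparameters should be taken from the paper itself.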
