Universal Graph Transformer Self-Attention Networks
The Web Conference (WWW), 2022
Abstract
The transformer self-attention network has been used extensively in research domains such as computer vision, image processing, and natural language processing. The transformer, however, has not been widely adopted in graph neural networks (GNNs), where constructing an advanced aggregation function is essential. To this end, we present an effective model, named UGformer, which leverages a transformer self-attention mechanism followed by a recurrent transition to induce an advanced aggregation function for learning graph representations. Experimental results show that UGformer achieves state-of-the-art accuracies on well-known benchmark datasets for graph classification.
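To make the described aggregation scheme concrete, below is a minimal PyTorch sketch of one "self-attention followed by a recurrent transition" step in the spirit of the abstract. The class name UGformerLayerSketch, the use of nn.MultiheadAttention and nn.GRUCell, and the fixed-size neighbor sampling are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class UGformerLayerSketch(nn.Module):
    # Hypothetical sketch of a self-attention + recurrent-transition
    # aggregation step in the spirit of UGformer (not the authors' code).

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        # Transformer self-attention over a node and its sampled neighbors.
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Recurrent transition (GRU cell assumed here) updating the node state.
        self.gru = nn.GRUCell(dim, dim)

    def forward(self, x: torch.Tensor, neighbors: torch.Tensor) -> torch.Tensor:
        # x:         (num_nodes, dim)  current node embeddings
        # neighbors: (num_nodes, k)    indices of k sampled neighbors per node
        seq = torch.cat([x.unsqueeze(1), x[neighbors]], dim=1)  # (N, k+1, dim)
        attended, _ = self.attn(seq, seq, seq)   # self-attention over the set
        agg = attended[:, 0, :]                  # attended target-node position
        return self.gru(agg, x)                  # recurrent transition / update

# Toy usage: 5 nodes, 3 sampled neighbors each, 16-dim embeddings.
layer = UGformerLayerSketch(dim=16)
x = torch.randn(5, 16)
neighbors = torch.randint(0, 5, (5, 3))
out = layer(x, neighbors)
print(out.shape)  # torch.Size([5, 16])

Stacking several such layers and pooling the node states (e.g., sum readout) would yield a graph-level representation for classification; the actual layer count, neighbor sampling, and readout in the paper may differ from this sketch.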
