Sparsifying Transformer Models with Differentiable Representation Pooling

Annual Meeting of the Association for Computational Linguistics (ACL), 2020
Abstract

We propose a novel method to sparsify attention in the Transformer model by learning to select the most informative token representations during training, thus focusing on task-specific parts of the input. The quadratic time and memory complexity of attention is reduced to sublinear by means of a robust differentiable top-k operator. For example, our experiments on a challenging long-document summarization task show that our method is much faster and up to 16 times more memory efficient while significantly outperforming both dense and state-of-the-art sparse transformer models. The method can be effortlessly applied to many models used in NLP and CV, and can be combined with other improvements, since representation pooling addresses a different aspect of attention's complexity problem.
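The abstract does not spell out the pooling operator itself, but the core idea of scoring tokens and keeping only the top-k representations can be illustrated with a minimal sketch. The snippet below is a hypothetical hard top-k pooling with softmax-weighted scores, not the paper's robust differentiable operator; the function and variable names (`topk_pool`, `scores`) are assumptions for illustration.

```python
import numpy as np

def topk_pool(tokens, scores, k):
    """Keep only the k highest-scoring token representations.

    tokens: (n, d) array of token representations
    scores: (n,) learned informativeness scores (hypothetical scorer)
    Returns a (k, d) pooled sequence, preserving original token order.
    """
    # indices of the k largest scores, restored to input order
    idx = np.sort(np.argsort(scores)[-k:])
    # weight the kept tokens by a softmax over their scores; in a soft
    # relaxation these weights are what makes the selection differentiable
    w = np.exp(scores[idx] - scores[idx].max())
    w /= w.sum()
    return tokens[idx] * w[:, None]

n, d, k = 8, 4, 3
tokens = np.random.randn(n, d)
scores = np.random.randn(n)
pooled = topk_pool(tokens, scores, k)
print(pooled.shape)  # (3, 4)
```

With the sequence shortened from n to k tokens before attention, the attention matrix shrinks from n x n to k x k, which is the source of the memory savings the abstract describes.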
