
Sparsifying Transformer Models with Trainable Representation Pooling

Annual Meeting of the Association for Computational Linguistics (ACL), 2020
Abstract

We propose a novel method to sparsify attention in the Transformer model by learning to select the most informative token representations during training, thus focusing on the task-specific parts of the input. The quadratic time and memory complexity of attention is reduced to sublinear thanks to a robust trainable top-$k$ operator. Our experiments on a challenging long-document summarization task show that even our simple baseline performs comparably to the current SOTA, and that with trainable pooling we can retain its top quality while being 1.8× faster during training, 4.5× faster during inference, and up to 13× more computationally efficient in the decoder.
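The core idea of score-based token pooling can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `topk_pool` and the example scores are hypothetical, and the actual method uses a *differentiable* top-$k$ operator so the scorer can be trained end to end, whereas this sketch shows only the hard selection applied at inference.

```python
import numpy as np

def topk_pool(tokens, scores, k):
    """Keep only the k highest-scoring token representations.

    tokens: (seq_len, d) array of token representations
    scores: (seq_len,) relevance scores -- in the paper these come
            from a learned scorer; here they are simply given
    Returns the selected (k, d) representations in original order.
    """
    # Indices of the k largest scores, re-sorted to preserve sequence order.
    idx = np.sort(np.argsort(scores)[-k:])
    return tokens[idx]

# Attention over the pooled sequence costs O(k^2) instead of O(n^2).
tokens = np.random.randn(8, 4)
scores = np.array([0.1, 0.9, 0.2, 0.8, 0.05, 0.7, 0.3, 0.6])
pooled = topk_pool(tokens, scores, k=3)
print(pooled.shape)  # (3, 4)
```

Because downstream attention only sees the `k` pooled tokens, the cost of the decoder's cross-attention no longer scales with the full input length.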
