
Sparsifying Transformer Models with Trainable Representation Pooling

Abstract

We propose a novel method to sparsify attention in the Transformer model by learning to select the most informative token representations during the training process, thus focusing on the task-specific parts of an input. We reduce the quadratic time and memory complexity to sublinear with a robust trainable top-k operator. Our experiments on a challenging long document summarization task show that even our simple baseline performs comparably to the current SOTA, and with trainable pooling, we can retain its top quality, while being 1.8× faster during training, 4.5× faster during inference, and up to 13× more computationally efficient in the decoder.
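To illustrate the core idea of scoring token representations and attending only over the k highest-scoring ones, here is a minimal sketch in PyTorch. It assumes a simple linear scorer with sigmoid gating to keep the selection differentiable; the class name `TopKPooling` and its parameters are hypothetical, and this is not the paper's exact trainable top-k operator.

```python
# Illustrative sketch (not the paper's exact operator): score tokens, keep the
# top-k representations, and let gradients reach the scorer via soft gates.
import torch
import torch.nn as nn

class TopKPooling(nn.Module):
    def __init__(self, hidden_dim: int, k: int):
        super().__init__()
        self.scorer = nn.Linear(hidden_dim, 1)  # learns token "informativeness"
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, hidden_dim)
        scores = self.scorer(x).squeeze(-1)                  # (batch, seq_len)
        topk_scores, topk_idx = scores.topk(self.k, dim=-1)  # hard selection
        # Gather the selected token representations.
        idx = topk_idx.unsqueeze(-1).expand(-1, -1, x.size(-1))
        selected = x.gather(1, idx)                          # (batch, k, hidden_dim)
        # Scale by gated scores so the scorer receives a gradient signal.
        gates = torch.sigmoid(topk_scores).unsqueeze(-1)
        return selected * gates

# Usage: pool 16 of 1024 encoder states, so downstream (cross-)attention
# operates over k tokens instead of the full sequence.
pool = TopKPooling(hidden_dim=512, k=16)
tokens = torch.randn(2, 1024, 512)
pooled = pool(tokens)  # (2, 16, 512)
```

Because attention cost scales with the number of attended positions, replacing the full sequence with the k pooled representations is what yields the reported speedups in the decoder.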
