Faster Transformer Decoding: N-gram Masked Self-Attention

Abstract

Motivated by the fact that most of the information relevant to the prediction of target tokens is drawn from the source sentence $S = s_1, \ldots, s_S$, we propose truncating the target-side window used for computing self-attention by making an $N$-gram assumption. Experiments on WMT EnDe and EnFr data sets show that the $N$-gram masked self-attention model loses very little in BLEU score for $N$ values in the range $4, \ldots, 8$, depending on the task.
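
As a rough illustration of the idea (a minimal sketch, not the authors' implementation), the snippet below builds the banded causal mask implied by an $N$-gram assumption: target position $i$ may attend only to itself and the previous $N-1$ target tokens. The function names (`ngram_causal_mask`, `masked_self_attention`) and the NumPy single-head formulation are assumptions for illustration only.

```python
import numpy as np

def ngram_causal_mask(seq_len: int, n: int) -> np.ndarray:
    """Boolean mask where entry (i, j) is True iff query position i may
    attend to key position j under an n-gram assumption:
    i - n + 1 <= j <= i (causal and within the window)."""
    i = np.arange(seq_len)[:, None]   # query positions
    j = np.arange(seq_len)[None, :]   # key positions
    return (j <= i) & (j > i - n)

def masked_self_attention(q, k, v, n: int):
    """Scaled dot-product self-attention restricted to an n-gram window.
    q, k, v: arrays of shape (seq_len, d)."""
    seq_len, d = q.shape
    scores = q @ k.T / np.sqrt(d)                    # (seq_len, seq_len)
    mask = ngram_causal_mask(seq_len, n)
    scores = np.where(mask, scores, -1e9)            # block out-of-window keys
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over allowed keys
    return weights @ v

# Example: with n=4, target position 10 attends only to positions 7..10.
mask = ngram_causal_mask(seq_len=12, n=4)
print(mask[10])  # True at indices 7..10, False elsewhere
```

Because each decoder position only ever needs the last $N-1$ target states, the per-step self-attention cost (and cache size) during incremental decoding becomes constant in $N$ rather than growing with the generated prefix length.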
