FutureFill: Fast Generation from Convolutional Sequence Models
Main: 9 pages, 8 figures, 16 tables; Bibliography: 3 pages; Appendix: 13 pages
Abstract
We address the challenge of efficient auto-regressive generation in sequence prediction models by introducing FutureFill, a general-purpose fast generation method for any sequence prediction algorithm based on convolutional operators. FutureFill reduces generation time from quadratic to quasilinear in the context length. Moreover, when generating from a prompt, it requires a prefill cache whose size grows only with the number of tokens to be generated, which is often much smaller than the caches required by standard convolutional or attention-based models. We validate our theoretical claims with experiments on synthetic tasks and demonstrate substantial efficiency gains when generating from a deep convolutional sequence prediction model.
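To make the prefill-cache idea concrete, below is a minimal sketch (not the paper's implementation) for a toy linear convolutional model: one convolution over the prompt precomputes its contribution to every future position, so the cache holds only K entries regardless of prompt length, and each decoding step adds just a short convolution over the tokens generated so far. The function names and the `step` callback are illustrative assumptions; the paper's method additionally batches the decode-side convolutions to reach quasilinear total time, which this naive loop does not do.

```python
import numpy as np

def futurefill_generate(k, prompt, K, step):
    """Sketch of FutureFill-style generation for a toy linear convolutional
    model: token x_{T+t} is predicted from a causal convolution of the
    filter k with x_{0..T+t-1}, then passed through a hypothetical `step`
    (a stand-in for the model's sampling/readout)."""
    T, L = len(prompt), len(k)
    # Prefill: one convolution gives the prompt's contribution to every
    # future position at once; only the first K entries are kept, so the
    # cache size is O(K), independent of the prompt length T.
    full = np.convolve(k, prompt)            # full[m] = sum_j k[m-j] * prompt[j]
    cache = np.zeros(K)
    n = min(K, max(0, len(full) - (T - 1)))
    cache[:n] = full[T - 1 : T - 1 + n]
    gen = np.zeros(K)
    for t in range(K):
        # Contribution of already-generated tokens (a short convolution,
        # at most L terms); FutureFill batches these via FFT convolutions.
        recent = sum(k[t - 1 - i] * gen[i] for i in range(max(0, t - L), t))
        gen[t] = step(cache[t] + recent)
    return gen
```

Because the prompt's influence is folded into the cache up front, decoding never re-touches the prompt, which is where the memory saving over a full convolutional (or attention KV) cache comes from.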
