Superposed Decoding: Multiple Generations from a Single Autoregressive Inference Pass

Abstract

Many applications today provide users with multiple auto-complete drafts as they type, including GitHub's code completion, Gmail's smart compose, and Apple's messaging auto-suggestions. Under the hood, language models support this by running an autoregressive inference pass to provide a draft. Consequently, providing k drafts to the user requires running an expensive language model k times. To alleviate the computation cost of running k inference passes, we propose Superposed Decoding, a new decoding algorithm that generates k drafts at the computation cost of one autoregressive inference pass. We achieve this by feeding a superposition of the most recent token embeddings from the k drafts as input to the next decoding step of the language model. At every inference step we combine the k drafts with the top-k tokens to get k^2 new drafts and cache the k most likely options, using an n-gram interpolation with minimal compute overhead to filter out incoherent generations. Our experiments show that k drafts from Superposed Decoding are at least as coherent and factual as Nucleus Sampling and Greedy Decoding respectively, while being at least 2.44× faster for k ≥ 3. In a compute-normalized setting, user evaluations demonstrably favor text generated by Superposed Decoding over Nucleus Sampling. Code and more examples open-sourced at https://github.com/RAIVNLab/SuperposedDecoding.
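The expand-and-prune step described above can be sketched as follows. This is an illustrative toy, not the authors' implementation: the token ids, log-probabilities, the `superposed_step` function, and the `alpha` interpolation weight are all assumptions made for clarity, and the top-k continuations stand in for the output of the single forward pass on the superposed embedding.

```python
# Sketch of one Superposed Decoding step: k drafts are each extended with
# the top-k next tokens from a single forward pass, giving k^2 candidates,
# which are scored with an n-gram interpolation and pruned back to k.
import heapq

def superposed_step(drafts, topk_tokens, topk_logprobs, ngram_logprob, alpha=0.8):
    """drafts: list of k (token_list, cumulative_logprob) pairs.
    topk_tokens / topk_logprobs: top-k continuations from the one
    forward pass on the superposed embedding.
    ngram_logprob: cheap n-gram scorer used to filter incoherent
    candidates (hypothetical helper)."""
    k = len(drafts)
    candidates = []
    for tokens, score in drafts:                           # k drafts ...
        for tok, lp in zip(topk_tokens, topk_logprobs):    # ... x k tokens
            new_tokens = tokens + [tok]
            # Interpolate the LM log-prob with the n-gram log-prob.
            new_score = score + alpha * lp + (1 - alpha) * ngram_logprob(new_tokens)
            candidates.append((new_tokens, new_score))
    # Keep only the k highest-scoring of the k^2 candidates.
    return heapq.nlargest(k, candidates, key=lambda c: c[1])

# Toy example with k = 2.
drafts = [([1], -0.5), ([2], -0.9)]
step = superposed_step(drafts,
                       topk_tokens=[3, 4],
                       topk_logprobs=[-0.2, -1.0],
                       ngram_logprob=lambda toks: -0.1 * len(toks))
print(step)  # two surviving drafts, each one token longer
```

Because only one forward pass is needed per step regardless of k, the cost stays close to that of a single greedy decode; the k^2 candidate scoring is cheap bookkeeping on top.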