
Esoteric Language Models

Main: 11 pages · Appendix: 25 pages · Bibliography: 4 pages · 27 figures · 13 tables
Abstract

Diffusion-based language models offer a compelling alternative to autoregressive (AR) models by enabling parallel and controllable generation. Within this family, Masked Diffusion Models (MDMs) currently perform best but still underperform AR models in perplexity and lack key inference-time efficiency features, most notably KV caching. We introduce Eso-LMs, a new family of models that fuses the AR and MDM paradigms, smoothly interpolating between their perplexities while overcoming their respective limitations. Unlike prior work, which uses transformers with bidirectional attention as MDM denoisers, we exploit the connection between MDMs and Any-Order autoregressive models and adopt causal attention. This design lets us compute the exact likelihood of MDMs for the first time and, crucially, enables us to introduce KV caching for MDMs while preserving parallel generation, significantly improving inference efficiency. Combined with an optimized sampling schedule, Eso-LMs achieve a new state of the art on the speed-quality Pareto frontier for unconditional generation. On long contexts, they yield 14-65× faster inference than standard MDMs and 3-4× faster inference than prior semi-autoregressive approaches. We provide code, model checkpoints, and video tutorials on the project page: this http URL
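To make the KV-caching claim concrete, below is a minimal, hypothetical PyTorch sketch, not the authors' implementation, of a single causal-attention denoising layer that reveals masked positions in a sampled order. Each newly denoised token attends only to the cached keys/values of already-clean tokens, so those entries are computed once and reused, which is what makes AR-style KV caching compatible with an MDM-style sampler. All names (w_q, k_cache, MASK_ID, etc.) and the one-layer, one-token-per-step setup are illustrative; the actual model uses full transformer blocks and can denoise multiple tokens per step in parallel.

```python
# Hypothetical single-layer sketch of KV caching for a causal-attention denoiser.
# Not the paper's implementation; names and the one-layer setup are illustrative.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

d_model, vocab, seq_len = 64, 100, 8
embed = torch.nn.Embedding(vocab, d_model)
w_q = torch.nn.Linear(d_model, d_model, bias=False)  # query projection
w_k = torch.nn.Linear(d_model, d_model, bias=False)  # key projection
w_v = torch.nn.Linear(d_model, d_model, bias=False)  # value projection
lm_head = torch.nn.Linear(d_model, vocab, bias=False)

MASK_ID = 0
tokens = torch.full((seq_len,), MASK_ID, dtype=torch.long)  # start fully masked
order = torch.randperm(seq_len)  # order in which positions are denoised

k_cache, v_cache = [], []
with torch.no_grad():
    for pos in order.tolist():
        # Query comes from the (still-masked) position being denoised this step.
        h = embed(tokens[pos:pos + 1])                     # (1, d_model)
        q = w_q(h)

        if k_cache:
            # Attend only to previously denoised positions via the cache:
            # their K/V were computed once and are reused unchanged.
            K = torch.cat(k_cache, dim=0)                  # (n_cached, d_model)
            V = torch.cat(v_cache, dim=0)
            attn = F.softmax(q @ K.T / d_model ** 0.5, dim=-1)
            ctx = attn @ V
        else:
            ctx = torch.zeros_like(q)

        # Predict the clean token for this position and commit it.
        logits = lm_head(h + ctx)                          # (1, vocab)
        tokens[pos] = torch.distributions.Categorical(logits=logits).sample().item()

        # Cache K/V of the now-clean token; it is never recomputed later.
        h_clean = embed(tokens[pos:pos + 1])
        k_cache.append(w_k(h_clean))
        v_cache.append(w_v(h_clean))

print(tokens)  # one fully denoised sequence
```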
