
Infini-gram: Scaling Unbounded n-gram Language Models to a Trillion Tokens

Abstract

Are n-gram language models still relevant in this era of neural large language models (LLMs)? Our answer is yes, and we showcase their value in both text analysis and improving neural LLMs. We do so by modernizing n-gram LMs in two aspects. First, we train them at the same data scale as neural LLMs -- 5 trillion tokens. This is the largest n-gram LM ever built. Second, existing n-gram LMs use small n, which hinders their performance; we instead allow n to be arbitrarily large by introducing a new ∞-gram LM with backoff. Instead of pre-computing n-gram count tables (which would be very expensive), we develop an engine named infini-gram -- powered by suffix arrays -- that can compute ∞-gram (as well as n-gram with arbitrary n) probabilities with millisecond-level latency. The ∞-gram framework and the infini-gram engine enable us to conduct many novel and interesting analyses of human-written and machine-generated text: we find that the ∞-gram LM has fairly high accuracy for next-token prediction (47%) and can complement neural LLMs to greatly reduce their perplexity. When analyzing machine-generated text, we also observe irregularities in the machine--∞-gram agreement level with respect to the suffix length, which indicates deficiencies in neural LLM pretraining and the positional embeddings of Transformers.
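The sketch below illustrates the general idea behind suffix-array-based ∞-gram estimation; it is not the authors' infini-gram engine or its API, and the function names (build_suffix_array, count, infgram_prob) are illustrative. Binary search over a suffix array yields the corpus count of any token sequence, and the ∞-gram probability backs off to the longest suffix of the prompt that occurs in the corpus.

    # Toy Python sketch of suffix-array counting and infinity-gram backoff.
    # Assumes a small in-memory corpus; the real engine operates on disk at trillion-token scale.

    def build_suffix_array(tokens):
        """Toy O(n^2 log n) construction; production systems use linear-time builders."""
        return sorted(range(len(tokens)), key=lambda i: tokens[i:])

    def count(tokens, sa, pattern):
        """Count occurrences of `pattern` by binary-searching the suffix array."""
        m = len(pattern)
        # First suffix whose length-m prefix is >= pattern.
        lo, hi = 0, len(sa)
        while lo < hi:
            mid = (lo + hi) // 2
            if tokens[sa[mid]:sa[mid] + m] < pattern:
                lo = mid + 1
            else:
                hi = mid
        left = lo
        # First suffix whose length-m prefix is > pattern.
        lo, hi = left, len(sa)
        while lo < hi:
            mid = (lo + hi) // 2
            if tokens[sa[mid]:sa[mid] + m] <= pattern:
                lo = mid + 1
            else:
                hi = mid
        return lo - left

    def infgram_prob(tokens, sa, prompt, w):
        """Back off to the longest prompt suffix with nonzero count, then estimate P(w | suffix)."""
        for start in range(len(prompt) + 1):  # longest suffix first
            suffix = prompt[start:]
            denom = count(tokens, sa, suffix)
            if denom > 0:
                return count(tokens, sa, suffix + [w]) / denom
        return 0.0  # unreachable: the empty suffix always has nonzero count

For example, with tokens = "the cat sat on the mat the cat sat".split() and prompt "on the cat", the full prompt never occurs, so the estimate backs off to the suffix "the cat" (count 2) and assigns probability 2/2 = 1.0 to the next token "sat".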

@article{liu2025_2401.17377,
  title={Infini-gram: Scaling Unbounded n-gram Language Models to a Trillion Tokens},
  author={Jiacheng Liu and Sewon Min and Luke Zettlemoyer and Yejin Choi and Hannaneh Hajishirzi},
  journal={arXiv preprint arXiv:2401.17377},
  year={2025}
}