Evaluating n-Gram Novelty of Language Models Using Rusty-DAWG

Abstract

How novel are texts generated by language models (LMs) relative to their training corpora? In this work, we investigate the extent to which modern LMs generate n-grams from their training data, evaluating both (i) the probability LMs assign to complete training n-grams and (ii) n-novelty, the proportion of n-grams generated by an LM that did not appear in the training data (for arbitrarily large n). To enable arbitrary-length n-gram search over a corpus in constant time, we develop Rusty-DAWG, a novel search tool inspired by indexing of genomic data. We compare the novelty of LM-generated text to human-written text and explore factors that affect generation novelty, focusing on the Pythia models. We find that, for n > 4, LM-generated text is less novel than human-written text, though it is more novel for smaller n. Larger LMs and more constrained decoding strategies both decrease novelty. Finally, we show that LMs complete n-grams with lower loss if they are more frequent in the training data. Overall, our results reveal factors influencing the novelty of LM-generated text, and we release Rusty-DAWG to facilitate further pretraining data research.
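
To make the n-novelty metric concrete, below is a minimal Python sketch of the underlying idea, not the Rusty-DAWG implementation itself: build a suffix automaton (a DAWG-like index of the kind used for genomic data) over the training tokens, stream generated text through it, and read off, for each position, the longest suffix that occurs in the training data. All names here (SuffixAutomaton, matched_suffix_lengths, n_novelty) are illustrative, not the tool's actual API.

```python
# Minimal sketch (not the actual Rusty-DAWG API). A suffix automaton is a
# DAWG-like index that recognizes every substring of the training sequence.
class SuffixAutomaton:
    def __init__(self):
        self.next = [{}]   # per-state transition maps: token -> state
        self.link = [-1]   # suffix links
        self.len = [0]     # length of the longest substring in each state
        self.last = 0      # state for the full text seen so far

    def extend(self, c):
        """Append one token to the indexed text (standard construction)."""
        cur = len(self.len)
        self.next.append({})
        self.link.append(-1)
        self.len.append(self.len[self.last] + 1)
        p = self.last
        while p != -1 and c not in self.next[p]:
            self.next[p][c] = cur
            p = self.link[p]
        if p == -1:
            self.link[cur] = 0
        else:
            q = self.next[p][c]
            if self.len[p] + 1 == self.len[q]:
                self.link[cur] = q
            else:  # split state q by cloning it
                clone = len(self.len)
                self.next.append(dict(self.next[q]))
                self.link.append(self.link[q])
                self.len.append(self.len[p] + 1)
                while p != -1 and self.next[p].get(c) == q:
                    self.next[p][c] = clone
                    p = self.link[p]
                self.link[q] = clone
                self.link[cur] = clone
        self.last = cur

def matched_suffix_lengths(sam, tokens):
    """For each position i in `tokens`, the length of the longest suffix
    ending at i that occurs in the training text (amortized O(1) per token)."""
    state, length, out = 0, 0, []
    for c in tokens:
        while state != 0 and c not in sam.next[state]:
            state = sam.link[state]
            length = sam.len[state]
        if c in sam.next[state]:
            state = sam.next[state][c]
            length += 1
        else:
            state, length = 0, 0
        out.append(length)
    return out

def n_novelty(lengths, n):
    """Proportion of generated n-grams absent from the training data:
    the n-gram ending at position i is non-novel iff lengths[i] >= n."""
    ending = lengths[n - 1:]  # positions where a full n-gram ends
    if not ending:
        return float("nan")
    return sum(L < n for L in ending) / len(ending)

# Toy example (word-level tokens for readability):
train = "the cat sat on the mat".split()
gen = "the cat sat on a mat".split()
sam = SuffixAutomaton()
for tok in train:
    sam.extend(tok)
lengths = matched_suffix_lengths(sam, gen)
print(n_novelty(lengths, 3))  # 0.5: two of the four generated 3-grams are novel
```

Because each generated token advances the automaton by an amortized constant number of operations, a single pass over the generated text yields n-novelty for every n at once, which is what makes arbitrary-length n-gram search tractable over a large corpus.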
