Evaluating n-Gram Novelty of Language Models Using Rusty-DAWG

How novel are texts generated by language models (LMs) relative to their training corpora? In this work, we investigate the extent to which modern LMs generate n-grams from their training data, evaluating both (i) the probability LMs assign to complete training n-grams and (ii) n-novelty, the proportion of n-grams generated by an LM that did not appear in the training data (for arbitrarily large n). To enable arbitrary-length n-gram search over a corpus in constant time with respect to corpus size, we develop Rusty-DAWG, a novel search tool inspired by indexing of genomic data. We compare the novelty of LM-generated text to human-written text and explore factors that affect generation novelty, focusing on the Pythia models. We find that, for n > 4, LM-generated text is less novel than human-written text, though it is more novel for smaller n. Larger LMs and more constrained decoding strategies both decrease novelty. Finally, we show that LMs complete n-grams with lower loss if they are more frequent in the training data. Overall, our results reveal factors influencing the novelty of LM-generated text, and we release Rusty-DAWG to facilitate further pretraining data research.
View on arXiv
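
To make the n-novelty metric and the constant-time search claim concrete, here is a minimal character-level sketch in Python. It is not the paper's implementation: Rusty-DAWG is a Rust library that indexes token sequences with a DAWG (directed acyclic word graph), while the sketch below uses a plain suffix automaton over strings, and every name in it (SuffixAutomaton, n_novelty, and so on) is invented for illustration. The idea carries over: after indexing the training text once, membership of a generated n-gram can be checked in time that depends only on n, not on corpus size.

```python
# Illustrative sketch only -- NOT the Rusty-DAWG implementation.
# A character-level suffix automaton (a kind of DAWG) over a training
# text; all class/function names here are invented for this example.

class SuffixAutomaton:
    """Indexes every substring of `text` for O(query-length) lookup."""

    def __init__(self, text):
        # State 0 is the root; each state stores its longest-match
        # length, its suffix link, and its outgoing transitions.
        self.states = [{"len": 0, "link": -1, "next": {}}]
        self.last = 0
        for ch in text:
            self._extend(ch)

    def _extend(self, ch):
        # Standard online suffix-automaton construction step.
        cur = len(self.states)
        self.states.append(
            {"len": self.states[self.last]["len"] + 1, "link": -1, "next": {}}
        )
        p = self.last
        while p != -1 and ch not in self.states[p]["next"]:
            self.states[p]["next"][ch] = cur
            p = self.states[p]["link"]
        if p == -1:
            self.states[cur]["link"] = 0
        else:
            q = self.states[p]["next"][ch]
            if self.states[p]["len"] + 1 == self.states[q]["len"]:
                self.states[cur]["link"] = q
            else:
                # Clone q so substring lengths stay consistent.
                clone = len(self.states)
                self.states.append(
                    {
                        "len": self.states[p]["len"] + 1,
                        "link": self.states[q]["link"],
                        "next": dict(self.states[q]["next"]),
                    }
                )
                while p != -1 and self.states[p]["next"].get(ch) == q:
                    self.states[p]["next"][ch] = clone
                    p = self.states[p]["link"]
                self.states[q]["link"] = clone
                self.states[cur]["link"] = clone
        self.last = cur

    def contains(self, ngram):
        """Arbitrary-length n-gram membership, independent of corpus size."""
        state = 0
        for ch in ngram:
            state = self.states[state]["next"].get(ch)
            if state is None:
                return False
        return True


def longest_match_lengths(dawg, text):
    """For each position i, the length of the longest suffix of
    text[: i + 1] that occurs somewhere in the indexed corpus."""
    v, length, out = 0, 0, []
    for ch in text:
        while v != 0 and ch not in dawg.states[v]["next"]:
            v = dawg.states[v]["link"]
            length = dawg.states[v]["len"]
        if ch in dawg.states[v]["next"]:
            v = dawg.states[v]["next"][ch]
            length += 1
        out.append(length)
    return out


def n_novelty(dawg, text, n):
    """Fraction of length-n spans of `text` absent from the corpus."""
    total = len(text) - n + 1
    if total <= 0:
        return float("nan")
    lengths = longest_match_lengths(dawg, text)
    novel = sum(1 for i in range(n - 1, len(text)) if lengths[i] < n)
    return novel / total


dawg = SuffixAutomaton("the cat sat on the mat. ")
print(dawg.contains("cat sat"))                    # True
print(n_novelty(dawg, "the cat sat on a hat", 4))  # ~0.29: 4-grams near "a hat" are novel
```

The sliding walk in longest_match_lengths mirrors the abstract's complexity claim: each generated character costs amortized constant time regardless of how large the indexed corpus is, and an n-gram ending at position i occurs in the training data exactly when the matched suffix length at i is at least n. Rusty-DAWG applies the same matching idea at the token level, in Rust, at pretraining-corpus scale.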