The diversity of outputs generated by LLMs shapes perceptions of their quality and utility. High lexical diversity is often desirable, but there is no standard method for measuring this property. Templated answer structures and "canned" responses across different documents are readily noticeable, but difficult to visualize across large corpora. This work aims to standardize the measurement of text diversity. Specifically, we empirically investigate the convergent validity of existing scores across English texts, and we release diversity, an open-source Python package for measuring and extracting repetition in text. We also build a platform based on diversity for users to interactively explore repetition in text. We find that fast compression algorithms capture information similar to what is measured by slow-to-compute n-gram overlap homogeneity scores. Further, a combination of measures (compression ratios, self-repetition of long n-grams, Self-BLEU, and BERTScore) is sufficient to report, as these measures have low mutual correlation with one another.
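To make the measures named above concrete, the following is a minimal sketch of two of them, a corpus-level compression ratio and an n-gram self-repetition rate, written with the Python standard library. This is an illustration of the general techniques, not the API of the diversity package; the helper names compression_ratio and ngram_self_repetition are hypothetical.

```python
import zlib
from collections import Counter

def compression_ratio(texts):
    """Corpus compression ratio: original bytes / gzip-compressed bytes.
    Higher values indicate more redundancy, i.e. lower diversity."""
    raw = "\n".join(texts).encode("utf-8")
    return len(raw) / len(zlib.compress(raw))

def ngram_self_repetition(texts, n=4):
    """Fraction of n-gram occurrences that repeat an already-seen n-gram
    across the corpus; a simple homogeneity proxy."""
    counts = Counter(
        tuple(tokens[i : i + n])
        for text in texts
        for tokens in [text.split()]
        for i in range(len(tokens) - n + 1)
    )
    total = sum(counts.values())
    repeated = sum(c - 1 for c in counts.values())
    return repeated / total if total else 0.0

# Toy example: templated openings inflate both scores.
outputs = [
    "Sure! Here is a summary of the article you asked about.",
    "Sure! Here is a summary of the document you asked about.",
    "The report covers quarterly earnings and market trends.",
]
print(f"compression ratio: {compression_ratio(outputs):.2f}")
print(f"4-gram self-repetition: {ngram_self_repetition(outputs, n=4):.2f}")
```

The compression ratio captures corpus-wide redundancy cheaply, while the n-gram statistic localizes which exact phrases repeat; the paper's finding is that the fast compression-based score tracks the slower overlap-based ones closely.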