Measuring Diversity in Synthetic Datasets

Large language models (LLMs) are widely adopted to generate synthetic datasets for various natural language processing (NLP) tasks, such as text classification and summarization. However, accurately measuring the diversity of these synthetic datasets, an aspect crucial for robust model performance, remains a significant challenge. In this paper, we introduce DCScore, a novel method for measuring synthetic dataset diversity from a classification perspective. Specifically, DCScore formulates diversity evaluation as a sample classification task, leveraging the mutual relationships among samples. We further provide theoretical verification of the diversity-related axioms satisfied by DCScore, highlighting its role as a principled diversity evaluation method. Experimental results on synthetic datasets show that DCScore exhibits a stronger correlation with multiple diversity pseudo-truths of the evaluated datasets, underscoring its effectiveness. Moreover, both empirical and theoretical evidence demonstrates that DCScore substantially reduces computational costs compared to existing approaches. Code is available at: this https URL.
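The abstract describes the method only at a high level: diversity evaluation is cast as classifying each sample against every other sample in the dataset. The sketch below illustrates one way such a classification-based diversity score can be instantiated, assuming sample embeddings, cosine similarity, and a temperature-scaled softmax; it is not the paper's exact DCScore definition, and the function name, temperature parameter, and similarity choice are assumptions made here for illustration.

import numpy as np

def classification_diversity(embeddings: np.ndarray, tau: float = 0.1) -> float:
    """Illustrative classification-style diversity score (not the paper's exact DCScore).

    Each of the n samples is treated as its own class: for sample i we take a
    softmax over its similarities to all samples and read off the probability
    that i is "classified" as itself. Summing these self-classification
    probabilities gives a score between 1 (all samples identical) and n
    (all samples maximally distinct).
    """
    # Cosine similarity via row-normalized dot products (an assumption here).
    x = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = x @ x.T  # (n, n) pairwise similarities

    # Row-wise, temperature-scaled softmax: probs[i, j] = prob. that sample i is classified as sample j.
    logits = sim / tau
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)

    # Diversity = trace of the classification-probability matrix.
    return float(np.trace(probs))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    distinct = rng.normal(size=(100, 64))                          # 100 distinct samples
    duplicated = np.repeat(rng.normal(size=(1, 64)), 100, axis=0)  # 100 identical samples
    print(classification_diversity(distinct))    # close to 100
    print(classification_diversity(duplicated))  # ~1.0

Under this reading, duplicated samples share probability mass and drag the score toward 1, while mutually dissimilar samples push it toward n, which is the qualitative behavior a diversity measure should exhibit.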
@article{zhu2025_2502.08512,
  title   = {Measuring Diversity in Synthetic Datasets},
  author  = {Yuchang Zhu and Huizhe Zhang and Bingzhe Wu and Jintang Li and Zibin Zheng and Peilin Zhao and Liang Chen and Yatao Bian},
  journal = {arXiv preprint arXiv:2502.08512},
  year    = {2025}
}