A statistically consistent measure of Semantic Variability using Language Models

Abstract
To address the issue of variability in the output generated by a language model, we present a measure of semantic variability that is statistically consistent under mild assumptions. This measure, termed semantic spectral entropy, is an easy-to-implement algorithm that requires only off-the-shelf language models. We place very few restrictions on the language models, and we show through clear simulation studies that the method produces an accurate metric despite the randomness that arises from the language models.
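A minimal sketch of how a spectral-entropy style measure of semantic variability might be computed, assuming sampled responses are compared through a pairwise semantic-similarity matrix whose eigenvalue spectrum is converted into an entropy. The function name, the choice of similarity construction, and the toy matrix below are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def semantic_spectral_entropy(similarities: np.ndarray) -> float:
    """Spectral entropy of a pairwise semantic-similarity matrix.

    `similarities` is a symmetric (n x n) matrix with entries in [0, 1],
    e.g. similarities between n responses sampled from a language model,
    scored with an off-the-shelf embedding or entailment model (the
    specific scorer is an assumption here, not fixed by the abstract).
    """
    # Eigenvalues of a symmetric matrix; clip tiny negatives from
    # floating-point error, normalize to a probability distribution,
    # and take its Shannon entropy.
    eigvals = np.linalg.eigvalsh(similarities)
    eigvals = np.clip(eigvals, 0.0, None)
    p = eigvals / eigvals.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

# Toy example: three sampled answers, two near-duplicates and one
# semantically different response. Higher entropy indicates more
# semantic variability across the samples.
S = np.array([
    [1.0, 0.9, 0.1],
    [0.9, 1.0, 0.1],
    [0.1, 0.1, 1.0],
])
print(semantic_spectral_entropy(S))
```

In this sketch the entropy is low when all samples collapse into one semantic cluster (a single dominant eigenvalue) and grows as the responses spread over several distinct meanings.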
@article{liu2025_2502.00507,
  title   = {A statistically consistent measure of Semantic Variability using Language Models},
  author  = {Yi Liu},
  journal = {arXiv preprint arXiv:2502.00507},
  year    = {2025}
}