
Broader disclosive transparency (truth and clarity in communication regarding the function of AI systems) is widely considered desirable. Unfortunately, it is a nebulous concept, difficult to both define and quantify. Previous work has suggested that a trade-off exists between greater disclosive transparency and user confusion, where 'too much information' clouds a reader's understanding of what a system description means. We address both of these issues by connecting disclosive transparency to a "replication room" thought experiment, where the person describing the system attempts to convey the requisite information for a third party to reconstruct it. In this setting, the degree to which the necessary information is conveyed represents the description's transparency, and the level of expertise needed by the third party corresponds to potential user confusion. We introduce two neural language model-based probabilistic metrics to model these factors, and demonstrate that they correlate with user and expert opinions of system transparency, making them a valid objective proxy. Finally, we apply these metrics to study the relationships between transparency, confusion, and user perceptions in a corpus of NLP demo abstracts.
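To make the idea of a language-model-based probabilistic metric concrete, the sketch below scores a system description by its mean per-token negative log-likelihood under a pretrained LM and correlates the scores with human ratings. This is a minimal illustration only: the model choice (GPT-2), the NLL proxy for reader confusion, and the toy descriptions and ratings are all assumptions for exposition, not the specific metrics or corpus used in the paper.

```python
# Sketch: an LM-based probabilistic score for a system description,
# validated against (hypothetical) human confusion ratings.
import torch
from scipy.stats import spearmanr
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def mean_token_nll(text: str) -> float:
    """Average negative log-likelihood per token of `text` under the LM.

    Higher values mean the model finds the text less predictable; here
    it stands in for a 'reader confusion' score.
    """
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # With labels supplied, the model returns the mean cross-entropy loss.
        out = model(**enc, labels=enc["input_ids"])
    return out.loss.item()

# Toy validation: rank-correlate metric scores with placeholder ratings.
descriptions = [
    "We train a sentiment classifier on labeled movie reviews.",
    "Our pipeline fine-tunes BERT, then applies beam-search decoding.",
    "The system synergizes hyper-contextual embeddings end-to-end.",
]
human_confusion = [1.0, 2.0, 4.5]  # hypothetical annotator scores
scores = [mean_token_nll(d) for d in descriptions]
rho, p = spearmanr(scores, human_confusion)
print(f"Spearman rho = {rho:.2f} (p = {p:.2f})")
```

A rank correlation such as Spearman's rho is one natural way to check that an automatic score tracks human judgments; the paper's own validation against user and expert opinions may differ in both metric and protocol.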