Modeling Disclosive Transparency in NLP Application Descriptions

Conference on Empirical Methods in Natural Language Processing (EMNLP), 2021
2 January 2021
Michael Stephen Saxon
Sharon Levy
Xinyi Wang
Alon Albalak
William Yang Wang
arXiv: 2101.00433
Abstract

Broader disclosive transparency (truth and clarity in communication regarding the function of AI systems) is widely considered desirable. Unfortunately, it is a nebulous concept, difficult to both define and quantify. Previous work has suggested that a trade-off exists between greater disclosive transparency and user confusion, where 'too much information' clouds a reader's understanding of what a system description means. We address both of these issues by connecting disclosive transparency to a "replication room" thought experiment, where the person describing the system attempts to convey the requisite information for a third party to reconstruct it. In this setting, the degree to which the necessary information is conveyed represents the description's transparency, and the level of expertise needed by the third party corresponds to potential user confusion. We introduce two neural language model-based probabilistic metrics to model these factors, and demonstrate that they correlate with user and expert opinions of system transparency, making them a valid objective proxy. Finally, we apply these metrics to study the relationships between transparency, confusion, and user perceptions in a corpus of NLP demo abstracts.
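The abstract does not spell out how the two language model-based probabilistic metrics are computed, so the following is only a minimal sketch of one plausible realization: scoring a system description by its mean per-token negative log-likelihood (log-perplexity) under a pretrained language model. The choice of GPT-2 via the Hugging Face transformers library, the function name, and the example sentence are all assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch (not the authors' implementation): score a system
# description by its mean per-token negative log-likelihood under a
# pretrained LM, as a rough probabilistic proxy for how hard the
# description is to predict/read.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def description_log_perplexity(description: str) -> float:
    """Mean per-token negative log-likelihood of `description` under GPT-2."""
    inputs = tokenizer(description, return_tensors="pt")
    with torch.no_grad():
        # Passing labels makes the model return the cross-entropy loss,
        # i.e., the mean per-token negative log-likelihood.
        outputs = model(inputs["input_ids"], labels=inputs["input_ids"])
    return outputs.loss.item()

# Hypothetical system description, for illustration only.
sentence = "Our system fine-tunes a transformer encoder on labeled sentiment data."
print(f"log-perplexity: {description_log_perplexity(sentence):.3f}")
```

Under this reading, a higher score means the description is less predictable to the model, which the paper's framing would associate with greater potential reader confusion; the paper's actual metrics may differ in model choice and normalization.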
