A Multistakeholder Approach Towards Evaluating AI Transparency Mechanisms

27 March 2021
Ana Lucic
Madhulika Srikumar
Umang Bhatt
Alice Xiang
Ankur Taly
Q. V. Liao
Maarten de Rijke
Abstract

Given the variety of stakeholders involved in, and affected by, decisions from machine learning (ML) models, it is important to recognize that different stakeholders have different transparency needs. Previous work found that the majority of deployed transparency mechanisms primarily serve technical stakeholders. In our work, we investigate how well transparency mechanisms might work in practice for a more diverse set of stakeholders by conducting a large-scale, mixed-methods user study across a range of organizations within a particular industry, such as health care, criminal justice, or content moderation. In this paper, we outline the setup for our study.
