ResearchTrend.AI


arXiv:2001.04879
Keeping Community in the Loop: Understanding Wikipedia Stakeholder Values for Machine Learning-Based Systems

14 January 2020
Authors: C. E. Smith, Bowen Yu, Anjali Srivastava, Aaron L. Halfaker, Loren G. Terveen, Haiyi Zhu
Topics: KELM

Papers citing "Keeping Community in the Loop: Understanding Wikipedia Stakeholder Values for Machine Learning-Based Systems"

12 papers shown
Studying Up Public Sector AI: How Networks of Power Relations Shape Agency Decisions Around AI Design and Use
Anna Kawakami, Amanda Coston, Hoda Heidari, Kenneth Holstein, Haiyi Zhu
21 May 2024
Summaries, Highlights, and Action Items: Design, Implementation and Evaluation of an LLM-Powered Meeting Recap System
Sumit Asthana, Sagi Hilleli, Pengcheng He, Aaron L. Halfaker
28 Jul 2023
"It is currently hodgepodge": Examining AI/ML Practitioners' Challenges during Co-production of Responsible AI Values
R. Varanasi, Nitesh Goyal
14 Jul 2023
"Thoughts & Prayers" or ":Heart Reaction: & :Prayer Reaction:": How the Release of New Reactions on CaringBridge Reshapes Supportive Communication During Health Crises
C. E. Smith, Hannah Miller Hillberg, Zachary Levonian
14 Apr 2023
Imagining New Futures beyond Predictive Systems in Child Welfare: A Qualitative Study with Impacted Stakeholders
Logan Stapleton, Min Hun Lee, Diana Qing, Mary-Frances Wright, Alexandra Chouldechova, Kenneth Holstein, Zhiwei Steven Wu, Haiyi Zhu
18 May 2022
Human-AI Collaboration via Conditional Delegation: A Case Study of Content Moderation
Vivian Lai, Samuel Carton, Rajat Bhatnagar, Vera Liao, Yunfeng Zhang, Chenhao Tan
25 Apr 2022
Improving Human-AI Partnerships in Child Welfare: Understanding Worker Practices, Challenges, and Desires for Algorithmic Decision Support
Anna Kawakami, Venkatesh Sivaraman, H. Cheng, Logan Stapleton, Yanghuidi Cheng, Diana Qing, Adam Perer, Zhiwei Steven Wu, Haiyi Zhu, Kenneth Holstein
05 Apr 2022
Jury Learning: Integrating Dissenting Voices into Machine Learning Models
Mitchell L. Gordon, Michelle S. Lam, J. Park, Kayur Patel, Jeffrey T. Hancock, Tatsunori Hashimoto, Michael S. Bernstein
07 Feb 2022
Stakeholder Participation in AI: Beyond "Add Diverse Stakeholders and Stir"
Fernando Delgado, Stephen Yang, Michael A. Madaio, Qian Yang
01 Nov 2021
Does the Whole Exceed its Parts? The Effect of AI Explanations on Complementary Team Performance
Gagan Bansal, Tongshuang Wu, Joyce Zhou, Raymond Fok, Besmira Nushi, Ece Kamar, Marco Tulio Ribeiro, Daniel S. Weld
26 Jun 2020
Disseminating Research News in HCI: Perceived Hazards, How-To's, and Opportunities for Innovation
C. E. Smith, Eduardo Nevarez, Haiyi Zhu
14 Jan 2020
Towards A Rigorous Science of Interpretable Machine Learning
Finale Doshi-Velez, Been Kim
Topics: XAI, FaML
28 Feb 2017