ResearchTrend.AI

Towards evaluating and eliciting high-quality documentation for intelligent systems

arXiv: 2011.08774
17 November 2020

David Piorkowski
D. González
John T. Richards
Stephanie Houde

Papers citing "Towards evaluating and eliciting high-quality documentation for intelligent systems" (4 of 4 papers shown)
CRScore: Grounding Automated Evaluation of Code Review Comments in Code Claims and Smells
Atharva Naik, Marcus Alenius, Daniel Fried, Carolyn Rose
29 Sep 2024
Evaluating a Methodology for Increasing AI Transparency: A Case Study
David Piorkowski, John T. Richards, Michael Hind
24 Jan 2022
Trustworthy AI: From Principles to Practices
Bo-wen Li, Peng Qi, Bo Liu, Shuai Di, Jingen Liu, Jiquan Pei, Jinfeng Yi, Bowen Zhou
04 Oct 2021
Improving fairness in machine learning systems: What do industry practitioners need?
Kenneth Holstein, Jennifer Wortman Vaughan, Hal Daumé, Miroslav Dudík, Hanna M. Wallach
FaML, HAI
13 Dec 2018