ResearchTrend.AI
Towards Responsible AI: A Design Space Exploration of Human-Centered Artificial Intelligence User Interfaces to Investigate Fairness

1 June 2022
Yuri Nakao
Lorenzo Strappelli
Simone Stumpf
A. Naseer
D. Regoli
Giulia Del Gamba

Papers citing "Towards Responsible AI: A Design Space Exploration of Human-Centered Artificial Intelligence User Interfaces to Investigate Fairness"

6 papers shown
Towards Multi-Stakeholder Evaluation of ML Models: A Crowdsourcing Study on Metric Preferences in Job-matching System
Takuya Yokota
Yuri Nakao
03 Mar 2025
Ethical AI Governance: Methods for Evaluating Trustworthy AI
Louise McCormack
Malika Bendechache
28 Aug 2024
EARN Fairness: Explaining, Asking, Reviewing, and Negotiating Artificial Intelligence Fairness Metrics Among Stakeholders
Lin Luo
Yuri Nakao
Mathieu Chollet
Hiroya Inakoshi
Simone Stumpf
16 Jul 2024
Break Out of a Pigeonhole: A Unified Framework for Examining Miscalibration, Bias, and Stereotype in Recommender Systems
Yongsu Ahn
Yu-Ru Lin
29 Dec 2023
Fairness Evaluation in Text Classification: Machine Learning Practitioner Perspectives of Individual and Group Fairness
Zahra Ashktorab
Benjamin Hoover
Mayank Agarwal
Casey Dugan
Werner Geyer
Han Yang
Mikhail Yurochkin
01 Mar 2023
Fair prediction with disparate impact: A study of bias in recidivism prediction instruments
Alexandra Chouldechova
24 Oct 2016