Towards a multi-stakeholder value-based assessment framework for algorithmic systems

9 May 2022
Mireia Yurrita, Dave Murray-Rust, Agathe Balayn, A. Bozzon
MLAU

Papers citing "Towards a multi-stakeholder value-based assessment framework for algorithmic systems" (14 papers)

A Catalog of Fairness-Aware Practices in Machine Learning Engineering
Gianmario Voria, Giulia Sellitto, Carmine Ferrara, Francesco Abate, A. Lucia, F. Ferrucci, Gemma Catolino, Fabio Palomba
FaML
29 Aug 2024

From Model Performance to Claim: How a Change of Focus in Machine Learning Replicability Can Help Bridge the Responsibility Gap
Tianqi Kou
19 Apr 2024

Information That Matters: Exploring Information Needs of People Affected by Algorithmic Decisions
Timothée Schmude, Laura M. Koesten, Torsten Möller, Sebastian Tschiatschek
24 Jan 2024

Unpacking Human-AI interactions: From interaction primitives to a design space
Konstantinos Tsiakas, Dave Murray-Rust
10 Jan 2024

The Value-Sensitive Conversational Agent Co-Design Framework
Malak Sadek, Rafael A. Calvo, C. Mougenot
3DV
18 Oct 2023

Deepfakes, Phrenology, Surveillance, and More! A Taxonomy of AI Privacy Risks
Hao-Ping Lee, Yu-Ju Yang, Thomas Serban Von Davier, J. Forlizzi, Sauvik Das
11 Oct 2023

Grasping AI: experiential exercises for designers
Dave Murray-Rust, M. Lupetti, Iohanna Nicenboim, W. V. D. Hoog
2 Oct 2023

How do you feel? Measuring User-Perceived Value for Rejecting Machine Decisions in Hate Speech Detection
Philippe Lammerts, Philip Lippmann, Yen-Chia Hsu, Fabio Casati, Jie Yang
21 Jul 2023

"It is currently hodgepodge": Examining AI/ML Practitioners' Challenges during Co-production of Responsible AI Values
R. Varanasi, Nitesh Goyal
14 Jul 2023

Certification Labels for Trustworthy AI: Insights From an Empirical Mixed-Method Study
Nicolas Scharowski, Michaela Benk, S. J. Kühne, Léane Wettstein, Florian Brühlmann
15 May 2023

A Systematic Literature Review of Human-Centered, Ethical, and Responsible AI
Mohammad Tahaei, Marios Constantinides, Daniele Quercia, Michael J. Muller
AI4TS
10 Feb 2023

Towards a Robust and Trustworthy Machine Learning System Development: An Engineering Perspective
Pulei Xiong, Scott Buffett, Shahrear Iqbal, Philippe Lamontagne, M. Mamun, Heather Molyneaux
OOD
8 Jan 2021

A Survey on Bias and Fairness in Machine Learning
Ninareh Mehrabi, Fred Morstatter, N. Saxena, Kristina Lerman, Aram Galstyan
SyDa, FaML
23 Aug 2019

Fair prediction with disparate impact: A study of bias in recidivism prediction instruments
Alexandra Chouldechova
FaML
24 Oct 2016