Towards Intersectionality in Machine Learning: Including More Identities, Handling Underrepresentation, and Performing Evaluation
Angelina Wang, V. V. Ramaswamy, Olga Russakovsky
10 May 2022 (FaML), arXiv:2205.04610

Papers citing "Towards Intersectionality in Machine Learning: Including More Identities, Handling Underrepresentation, and Performing Evaluation" (17 papers)

Interpretable and Fair Mechanisms for Abstaining Classifiers
Daphne Lenders, Andrea Pugnana, Roberto Pellungrini, Toon Calders, D. Pedreschi, F. Giannotti
24 Mar 2025 (FaML)

A Tutorial On Intersectionality in Fair Rankings
Chiara Criscuolo, Davide Martinenghi, Giuseppe Piccirillo
07 Feb 2025 (FaML)

It's complicated. The relationship of algorithmic fairness and non-discrimination regulations in the EU AI Act
Kristof Meding
22 Jan 2025 (FaML)

A Catalog of Fairness-Aware Practices in Machine Learning Engineering
Gianmario Voria, Giulia Sellitto, Carmine Ferrara, Francesco Abate, A. Lucia, F. Ferrucci, Gemma Catolino, Fabio Palomba
29 Aug 2024 (FaML)

Addressing Discretization-Induced Bias in Demographic Prediction
Evan Dong, Aaron Schein, Yixin Wang, Nikhil Garg
27 May 2024

A structured regression approach for evaluating model performance across intersectional subgroups
Christine Herlihy, Kimberly Truong, Alexandra Chouldechova, Miroslav Dudik
26 Jan 2024

"Kelly is a Warm Person, Joseph is a Role Model": Gender Biases in LLM-Generated Reference Letters
Yixin Wan, George Pu, Jiao Sun, Aparna Garimella, Kai-Wei Chang, Nanyun Peng
13 Oct 2023

Bias Testing and Mitigation in LLM-based Code Generation
Dong Huang, Qingwen Bu, Jie M. Zhang, Xiaofei Xie, Junjie Chen, Heming Cui
03 Sep 2023

The Ecological Fallacy in Annotation: Modelling Human Label Variation goes beyond Sociodemographics
Matthias Orlikowski, Paul Röttger, Philipp Cimiano
20 Jun 2023

Diversity and Inclusion in Artificial Intelligence
Didar Zowghi, F. Rimini
22 May 2023

An Empirical Analysis of Fairness Notions under Differential Privacy
Anderson Santana de Oliveira, Caelin Kaplan, Khawla Mallat, Tanmay Chakraborty
06 Feb 2023 (FedML)

Manifestations of Xenophobia in AI Systems
Nenad Tomašev, J. L. Maynard, Iason Gabriel
15 Dec 2022

Subgroup Robustness Grows On Trees: An Empirical Baseline Investigation
Josh Gardner, Zoran Popovic, Ludwig Schmidt
23 Nov 2022 (OOD)

Sociotechnical Harms of Algorithmic Systems: Scoping a Taxonomy for Harm Reduction
Renee Shelby, Shalaleh Rismani, Kathryn Henne, AJung Moon, Negar Rostamzadeh, ..., N'Mah Yilla-Akbari, Jess Gallegos, A. Smart, Emilio Garcia, Gurleen Virk
11 Oct 2022

When Personalization Harms: Reconsidering the Use of Group Attributes in Prediction
Vinith M. Suriyakumar, Marzyeh Ghassemi, Berk Ustun
04 Jun 2022

Characterizing Intersectional Group Fairness with Worst-Case Comparisons
A. Ghosh, Lea Genuit, Mary Reagan
05 Jan 2021 (FaML)

How Algorithmic Confounding in Recommendation Systems Increases Homogeneity and Decreases Utility
A. Chaney, Brandon M Stewart, Barbara E. Engelhardt
30 Oct 2017 (CML)