arXiv:1901.09565
Fairness in representation: quantifying stereotyping as a representational harm
28 January 2019
Mohsen Abbasi, Sorelle A. Friedler, C. Scheidegger, Suresh Venkatasubramanian
Papers citing "Fairness in representation: quantifying stereotyping as a representational harm" (15 of 15 papers shown)
1. Position is Power: System Prompts as a Mechanism of Bias in Large Language Models (LLMs). Anna Neumann, Elisabeth Kirsten, Muhammad Bilal Zafar, Jatinder Singh. 27 May 2025.
2. Muslim-Violence Bias Persists in Debiased GPT Models. Babak Hemmatian, Razan Baltaji, Lav Varshney. 25 Oct 2023.
3. Gender Stereotyping Impact in Facial Expression Recognition. Iris Dominguez-Catena, D. Paternain, M. Galar. 11 Oct 2022.
4. Measuring and mitigating voting access disparities: a study of race and polling locations in Florida and North Carolina. Mohsen Abbasi, Suresh Venkatasubramanian, Sorelle A. Friedler, K. Lum, Calvin Barrett. 30 May 2022.
5. The worst of both worlds: A comparative analysis of errors in learning from data in psychology and machine learning. Jessica Hullman, Sayash Kapoor, Priyanka Nanayakkara, Andrew Gelman, Arvind Narayanan. 12 Mar 2022.
6. Explainability for identification of vulnerable groups in machine learning models. Inga Strümke, Marija Slavkovik. [FaML] 01 Mar 2022.
7. Feature-based Individual Fairness in k-Clustering. Debajyoti Kar, Mert Kosan, Debmalya Mandal, Sourav Medya, A. Silva, P. Dey, Swagato Sanyal. [FaML] 09 Sep 2021.
8. The Values Encoded in Machine Learning Research. Abeba Birhane, Pratyusha Kalluri, Dallas Card, William Agnew, Ravit Dotan, Michelle Bao. 29 Jun 2021.
9. Obstructing Classification via Projection. P. Haghighatkhah, Wouter Meulemans, Bettina Speckmann, Jérôme Urhausen, Kevin Verbeek. 19 May 2021.
10. Precarity: Modeling the Long Term Effects of Compounded Decisions on Individual Instability. Pegah Nokhiz, Aravinda Kanchana Ruwanpathirana, Neal Patwari, Suresh Venkatasubramanian. 24 Apr 2021.
11. Fairness for Unobserved Characteristics: Insights from Technological Impacts on Queer Communities. Nenad Tomašev, Kevin R. McKee, Jackie Kay, Shakir Mohamed. [FaML] 03 Feb 2021.
12. Neural Machine Translation Doesn't Translate Gender Coreference Right Unless You Make It. Danielle Saunders, Rosie Sallis, Bill Byrne. 11 Oct 2020.
13. UnQovering Stereotyping Biases via Underspecified Questions. Tao Li, Tushar Khot, Daniel Khashabi, Ashish Sabharwal, Vivek Srikumar. 06 Oct 2020.
14. OSCaR: Orthogonal Subspace Correction and Rectification of Biases in Word Embeddings. Sunipa Dev, Tao Li, J. M. Phillips, Vivek Srikumar. 30 Jun 2020.
15. Fair clustering via equitable group representations. Mohsen Abbasi, Aditya Bhaskara, Suresh Venkatasubramanian. [FaML] [FedML] 19 Jun 2020.