What's in a Name? Reducing Bias in Bios without Access to Protected Attributes

10 April 2019
Alexey Romanov, Maria De-Arteaga, Hanna M. Wallach, J. Chayes, C. Borgs, Alexandra Chouldechova, S. Geyik, K. Kenthapadi, Anna Rumshisky, Adam Tauman Kalai

Papers citing "What's in a Name? Reducing Bias in Bios without Access to Protected Attributes"

22 / 22 papers shown

Reducing Sensitivity on Speaker Names for Text Generation from Dialogues
Qi Jia, Haifeng Tang, Kenny Q. Zhu
23 May 2023

Shielded Representations: Protecting Sensitive Attributes Through Iterative Gradient-Based Projection
Shadi Iskander, Kira Radinsky, Yonatan Belinkov
17 May 2023

Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment
Alejandro Peña, Ignacio Serna, Aythami Morales, Julian Fierrez, Alfonso Ortega, Ainhoa Herrarte, Manuel Alcántara, J. Ortega-Garcia
13 Feb 2023 · FaML

Better Hit the Nail on the Head than Beat around the Bush: Removing Protected Attributes with a Single Projection
P. Haghighatkhah, Antske Fokkens, Pia Sommerauer, Bettina Speckmann, Kevin Verbeek
08 Dec 2022

Subverting Fair Image Search with Generative Adversarial Perturbations
A. Ghosh, Matthew Jagielski, Chris L. Wilson
05 May 2022

How Gender Debiasing Affects Internal Model Representations, and Why It Matters
Hadas Orgad, Seraphina Goldfarb-Tarrant, Yonatan Belinkov
14 Apr 2022

Speciesist Language and Nonhuman Animal Bias in English Masked Language Models
Masashi Takeshita, Rafal Rzepka, K. Araki
10 Mar 2022

Impact of Pretraining Term Frequencies on Few-Shot Reasoning
Yasaman Razeghi, Robert L Logan IV, Matt Gardner, Sameer Singh
15 Feb 2022 · ReLM, LRM

Learning Fair Representations via Rate-Distortion Maximization
Somnath Basu Roy Chowdhury, Snigdha Chaturvedi
31 Jan 2022 · FaML

BBQ: A Hand-Built Bias Benchmark for Question Answering
Alicia Parrish, Angelica Chen, Nikita Nangia, Vishakh Padmakumar, Jason Phang, Jana Thompson, Phu Mon Htut, Sam Bowman
15 Oct 2021

A Survey of Race, Racism, and Anti-Racism in NLP
Anjalie Field, Su Lin Blodgett, Zeerak Talat, Yulia Tsvetkov
21 Jun 2021

Societal Biases in Retrieved Contents: Measurement Framework and Adversarial Mitigation for BERT Rankers
Navid Rekabsaz, Simone Kopeinik, Markus Schedl
28 Apr 2021

Contrastive Explanations for Model Interpretability
Alon Jacovi, Swabha Swayamdipta, Shauli Ravfogel, Yanai Elazar, Yejin Choi, Yoav Goldberg
02 Mar 2021

Exploring Text Specific and Blackbox Fairness Algorithms in Multimodal Clinical NLP
John Chen, Ian Berlot-Attwell, Safwan Hossain, Xindi Wang, Frank Rudzicz
19 Nov 2020 · FaML

"What We Can't Measure, We Can't Understand": Challenges to Demographic
  Data Procurement in the Pursuit of Fairness
"What We Can't Measure, We Can't Understand": Challenges to Demographic Data Procurement in the Pursuit of Fairness
Mckane Andrus
Elena Spitzer
Jeffrey Brown
Alice Xiang
27
126
0
30 Oct 2020
Two Simple Ways to Learn Individual Fairness Metrics from Data
Debarghya Mukherjee, Mikhail Yurochkin, Moulinath Banerjee, Yuekai Sun
19 Jun 2020 · FaML

Null It Out: Guarding Protected Attributes by Iterative Nullspace Projection
Shauli Ravfogel, Yanai Elazar, Hila Gonen, Michael Twiton, Yoav Goldberg
16 Apr 2020

Bias in Multimodal AI: Testbed for Fair Automatic Recruitment
Alejandro Peña, Ignacio Serna, Aythami Morales, Julian Fierrez
15 Apr 2020

Towards Understanding Gender Bias in Relation Extraction
Andrew Gaut, Tony Sun, Shirlyn Tang, Yuxin Huang, Jing Qian, ..., Jieyu Zhao, Diba Mirza, E. Belding, Kai-Wei Chang, William Yang Wang
09 Nov 2019 · FaML

Toward Gender-Inclusive Coreference Resolution
Yang Trista Cao, Hal Daumé
30 Oct 2019

Improving fairness in machine learning systems: What do industry practitioners need?
Kenneth Holstein, Jennifer Wortman Vaughan, Hal Daumé, Miroslav Dudík, Hanna M. Wallach
13 Dec 2018 · FaML, HAI

Fair prediction with disparate impact: A study of bias in recidivism prediction instruments
Alexandra Chouldechova
24 Oct 2016 · FaML