
Lipstick on a Pig: Debiasing Methods Cover up Systematic Gender Biases in Word Embeddings But do not Remove Them

9 March 2019
Hila Gonen, Yoav Goldberg

Papers citing "Lipstick on a Pig: Debiasing Methods Cover up Systematic Gender Biases in Word Embeddings But do not Remove Them"

Showing 8 of 308 citing papers.

Reducing Gender Bias in Word-Level Language Models with a Gender-Equalizing Loss Function
Yusu Qian, Urwa Muaz, Ben Zhang, J. Hyun
30 May 2019

Racial Bias in Hate Speech and Abusive Language Detection Datasets
Thomas Davidson, Debasmita Bhattacharya, Ingmar Weber
29 May 2019

SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems
Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
02 May 2019

Evaluating the Underlying Gender Bias in Contextualized Word Embeddings
Christine Basta, Marta R. Costa-jussà, Noe Casas
18 Apr 2019

Gender Bias in Contextualized Word Embeddings
Jieyu Zhao, Tianlu Wang, Mark Yatskar, Ryan Cotterell, Vicente Ordonez, Kai-Wei Chang
05 Apr 2019

Identifying and Reducing Gender Bias in Word-Level Language Models
Shikha Bordia, Samuel R. Bowman
05 Apr 2019

Black is to Criminal as Caucasian is to Police: Detecting and Removing Multiclass Bias in Word Embeddings
Thomas Manzini, Y. Lim, Yulia Tsvetkov, A. Black
03 Apr 2019

Measuring Societal Biases from Text Corpora with Smoothed First-Order Co-occurrence
Navid Rekabsaz, Robert West, James Henderson, Allan Hanbury
13 Dec 2018