© 2025 ResearchTrend.AI, All rights reserved.

Semantics derived automatically from language corpora contain human-like biases
Aylin Caliskan, J. Bryson, Arvind Narayanan
25 August 2016 · arXiv:1608.07187 (v4, latest)
Papers citing "Semantics derived automatically from language corpora contain human-like biases" (50 of 512 shown)
  • Systematic Rectification of Language Models via Dead-end Analysis (Mengyao Cao, Mehdi Fatemi, Jackie C.K. Cheung, Samira Shabanian; 27 Feb 2023)
  • In-Depth Look at Word Filling Societal Bias Measures (Matúš Pikuliak, Ivana Benová, Viktor Bachratý; 24 Feb 2023)
  • Fairness in Language Models Beyond English: Gaps and Challenges (Krithika Ramesh, Sunayana Sitaram, Monojit Choudhury; 24 Feb 2023)
  • FineDeb: A Debiasing Framework for Language Models (Akash Saravanan, Dhruv Mullick, Habibur Rahman, Nidhi Hegde; 05 Feb 2023)
  • Less, but Stronger: On the Value of Strong Heuristics in Semi-supervised Learning for Software Analytics (Huy Tu, Tim Menzies; 03 Feb 2023)
  • Co-Writing with Opinionated Language Models Affects Users' Views (Maurice Jakesch, Advait Bhat, Daniel Buschek, Lior Zalmanson, Mor Naaman; 01 Feb 2023)
  • Vision-Language Models Performing Zero-Shot Tasks Exhibit Gender-based Disparities (Melissa Hall, Laura Gustafson, Aaron B. Adcock, Ishan Misra, Candace Ross; 26 Jan 2023)
  • Dissociating language and thought in large language models (Kyle Mahowald, Anna A. Ivanova, I. Blank, Nancy Kanwisher, J. Tenenbaum, Evelina Fedorenko; 16 Jan 2023)
  • Improving Human-AI Collaboration With Descriptions of AI Behavior (Ángel Alexander Cabrera, Adam Perer, Jason I. Hong; 06 Jan 2023)
  • Contrastive Language-Vision AI Models Pretrained on Web-Scraped Multimodal Data Exhibit Sexual Objectification Bias (Robert Wolfe, Yiwei Yang, Billy Howe, Aylin Caliskan; 21 Dec 2022)
  • SERENGETI: Massively Multilingual Language Models for Africa (Ife Adebara, AbdelRahim Elmadany, Muhammad Abdul-Mageed, Alcides Alcoba Inciarte; 21 Dec 2022)
  • Trustworthy Social Bias Measurement (Rishi Bommasani, Percy Liang; 20 Dec 2022)
  • Human-Guided Fair Classification for Natural Language Processing (Florian E. Dorner, Momchil Peychev, Nikola Konstantinov, Naman Goel, Elliott Ash, Martin Vechev; 20 Dec 2022)
  • On Second Thought, Let's Not Think Step by Step! Bias and Toxicity in Zero-Shot Reasoning (Omar Shaikh, Hongxin Zhang, William B. Held, Michael S. Bernstein, Diyi Yang; 15 Dec 2022)
  • The effects of gender bias in word embeddings on depression prediction (Gizem Sogancioglu, Heysem Kaya; 15 Dec 2022)
  • Unsupervised Detection of Contextualized Embedding Bias with Application to Ideology (Valentin Hofmann, J. Pierrehumbert, Hinrich Schütze; 14 Dec 2022)
  • Paraphrase Identification with Deep Learning: A Review of Datasets and Methods (Chao Zhou, Cheng Qiu, Daniel Ernesto Acuna; 13 Dec 2022)
  • Better Hit the Nail on the Head than Beat around the Bush: Removing Protected Attributes with a Single Projection (P. Haghighatkhah, Antske Fokkens, Pia Sommerauer, Bettina Speckmann, Kevin Verbeek; 08 Dec 2022)
  • SODA: A Natural Language Processing Package to Extract Social Determinants of Health for Cancer Studies (Zehao Yu, Xi Yang, Chong Dang, P. Adekkanattu, Braja Gopal Patra, ..., T. George, W. Hogan, Yi Guo, Jiang Bian, Yonghui Wu; 06 Dec 2022)
  • Gender Biases Unexpectedly Fluctuate in the Pre-training Stage of Masked Language Models (Kenan Tang, Hanchun Jiang; 26 Nov 2022)
  • Undesirable Biases in NLP: Addressing Challenges of Measurement (Oskar van der Wal, Dominik Bachmann, Alina Leidinger, L. Maanen, Willem H. Zuidema, K. Schulz; 24 Nov 2022)
  • Mind Your Bias: A Critical Review of Bias Detection Methods for Contextual Language Models (Silke Husse, Andreas Spitz; 15 Nov 2022)
  • Does Debiasing Inevitably Degrade the Model Performance (Yiran Liu, Xiao-Yang Liu, Haotian Chen, Yang Yu; 14 Nov 2022)
  • ADEPT: A DEbiasing PrompT Framework (Ke Yang, Charles Yu, Yi R. Fung, Manling Li, Heng Ji; 10 Nov 2022)
  • Safe Latent Diffusion: Mitigating Inappropriate Degeneration in Diffusion Models (P. Schramowski, Manuel Brack, Bjorn Deiseroth, Kristian Kersting; 09 Nov 2022)
  • No Word Embedding Model Is Perfect: Evaluating the Representation Accuracy for Social Bias in the Media (Maximilian Spliethover, Maximilian Keiff, Henning Wachsmuth; 07 Nov 2022)
  • MABEL: Attenuating Gender Bias using Textual Entailment Data (Jacqueline He, Mengzhou Xia, C. Fellbaum, Danqi Chen; 26 Oct 2022)
  • Choose Your Lenses: Flaws in Gender Bias Evaluation (Hadas Orgad, Yonatan Belinkov; 20 Oct 2022)
  • The Tail Wagging the Dog: Dataset Construction Biases of Social Bias Benchmarks (Nikil Selvam, Sunipa Dev, Daniel Khashabi, Tushar Khot, Kai-Wei Chang; 18 Oct 2022)
  • Log-linear Guardedness and its Implications (Shauli Ravfogel, Yoav Goldberg, Ryan Cotterell; 18 Oct 2022)
  • Social Biases in Automatic Evaluation Metrics for NLG (Mingqi Gao, Xiaojun Wan; 17 Oct 2022)
  • Controlling Bias Exposure for Fair Interpretable Predictions (Zexue He, Yu Wang, Julian McAuley, Bodhisattwa Prasad Majumder; 14 Oct 2022)
  • SODAPOP: Open-Ended Discovery of Social Biases in Social Commonsense Reasoning Models (Haozhe An, Zongxia Li, Jieyu Zhao, Rachel Rudinger; 13 Oct 2022)
  • Social-Group-Agnostic Word Embedding Debiasing via the Stereotype Content Model (Ali Omrani, Brendan Kennedy, M. Atari, Morteza Dehghani; 11 Oct 2022)
  • Who Wrote this? How Smart Replies Impact Language and Agency in the Workplace (Kilian Wenker; 07 Oct 2022)
  • Prompt Compression and Contrastive Conditioning for Controllability and Toxicity Reduction in Language Models (David Wingate, Mohammad Shoeybi, Taylor Sorensen; 06 Oct 2022)
  • Re-contextualizing Fairness in NLP: The Case of India (Shaily Bhatt, Sunipa Dev, Partha P. Talukdar, Shachi Dave, Vinodkumar Prabhakaran; 25 Sep 2022)
  • Closing the Gender Wage Gap: Adversarial Fairness in Job Recommendation (Clara Rus, Jeffrey Luppes, Harrie Oosterhuis, Gido Schoenmacker; 20 Sep 2022)
  • Mitigating Representation Bias in Action Recognition: Algorithms and Benchmarks (Haodong Duan, Yue Zhao, Kai-xiang Chen, Yu Xiong, Dahua Lin; 20 Sep 2022)
  • Exploiting Cultural Biases via Homoglyphs in Text-to-Image Synthesis (Lukas Struppek, Dominik Hintersdorf, Felix Friedrich, Manuel Brack, P. Schramowski, Kristian Kersting; 19 Sep 2022)
  • Out of One, Many: Using Language Models to Simulate Human Samples (Lisa P. Argyle, Ethan C. Busby, Nancy Fulda, Joshua R Gubler, Christopher Rytting, David Wingate; 14 Sep 2022)
  • Generating Coherent Drum Accompaniment With Fills And Improvisations (Rishabh A. Dahale, Vaibhav Talwadker, Preeti Rao, Prateek Verma; 01 Sep 2022)
  • Debiasing Word Embeddings with Nonlinear Geometry (Lu Cheng, Nayoung Kim, Huan Liu; 29 Aug 2022)
  • Visual Comparison of Language Model Adaptation (Rita Sevastjanova, E. Cakmak, Shauli Ravfogel, Ryan Cotterell, Mennatallah El-Assady; 17 Aug 2022)
  • Debiasing Gender Bias in Information Retrieval Models (Dhanasekar Sundararaman, Vivek Subramanian; 02 Aug 2022)
  • A Multibias-mitigated and Sentiment Knowledge Enriched Transformer for Debiasing in Multimodal Conversational Emotion Recognition (Jinglin Wang, Fang Ma, Yazhou Zhang, Dawei Song; 17 Jul 2022)
  • A methodology to characterize bias and harmful stereotypes in natural language processing in Latin America (Laura Alonso Alemany, Luciana Benotti, Hernán Maina, Lucía González, Mariela Rajngewerc, ..., Guido Ivetta, Alexia Halvorsen, Amanda Rojo, M. Bordone, Beatriz Busaniche; 14 Jul 2022)
  • Towards A Holistic View of Bias in Machine Learning: Bridging Algorithmic Fairness and Imbalanced Learning (Damien Dablain, Bartosz Krawczyk, Nitesh Chawla; 13 Jul 2022)
  • FairDistillation: Mitigating Stereotyping in Language Models (Pieter Delobelle, Bettina Berendt; 10 Jul 2022)
  • A Comprehensive Empirical Study of Bias Mitigation Methods for Machine Learning Classifiers (Zhenpeng Chen, Jie M. Zhang, Federica Sarro, Mark Harman; 07 Jul 2022)