ResearchTrend.AI

© 2025 ResearchTrend.AI. All rights reserved.

Semantics derived automatically from language corpora contain human-like biases
arXiv:1608.07187

25 August 2016
Aylin Caliskan
J. Bryson
Arvind Narayanan

Papers citing "Semantics derived automatically from language corpora contain human-like biases"

Showing 50 of 512 citing papers.
RobBERT: a Dutch RoBERTa-based Language Model
Pieter Delobelle, Thomas Winters, Bettina Berendt
17 Jan 2020

Stereotypical Bias Removal for Hate Speech Detection Task using Knowledge-based Generalizations
Pinkesh Badjatiya, Manish Gupta, Vasudeva Varma
15 Jan 2020

Think Locally, Act Globally: Federated Learning with Local and Global Representations
Paul Pu Liang, Terrance Liu, Liu Ziyin, Nicholas B. Allen, Randy P. Auerbach, David Brent, Ruslan Salakhutdinov, Louis-Philippe Morency
06 Jan 2020

Measuring Non-Expert Comprehension of Machine Learning Fairness Metrics
Debjani Saha, Candice Schumann, Duncan C. McElfresh, John P. Dickerson, Michelle L. Mazurek, Michael Carl Tschantz
17 Dec 2019

Artificial mental phenomena: Psychophysics as a framework to detect perception biases in AI models
Lizhen Liang, Daniel Ernesto Acuna
15 Dec 2019

BERT has a Moral Compass: Improvements of ethical and moral values of machines
P. Schramowski, Cigdem Turan, Sophie F. Jentzsch, Constantin Rothkopf, Kristian Kersting
11 Dec 2019

Measuring Social Bias in Knowledge Graph Embeddings
Joseph Fisher, Dave Palfrey, Christos Christodoulopoulos, Arpit Mittal
05 Dec 2019

Towards Fairness in Visual Recognition: Effective Strategies for Bias Mitigation
Zeyu Wang, Klint Qinami, Yannis Karakozis, Kyle Genova, P. Nair, Kenji Hata, Olga Russakovsky
26 Nov 2019

A Causal Inference Method for Reducing Gender Bias in Word Embedding Relations
Zekun Yang, Juan Feng
25 Nov 2019

Predictive Biases in Natural Language Processing Models: A Conceptual Framework and Overview
Deven Santosh Shah, H. Andrew Schwartz, Dirk Hovy
09 Nov 2019

Towards Understanding Gender Bias in Relation Extraction
Andrew Gaut, Tony Sun, Shirlyn Tang, Yuxin Huang, Jing Qian, ..., Jieyu Zhao, Diba Mirza, E. Belding, Kai-Wei Chang, William Yang Wang
09 Nov 2019

Advances in Machine Learning for the Behavioral Sciences
Tomáš Kliegr, Š. Bahník, Johannes Fürnkranz
08 Nov 2019

Reducing Sentiment Bias in Language Models via Counterfactual Evaluation
Po-Sen Huang, Huan Zhang, Ray Jiang, Robert Stanforth, Johannes Welbl, Jack W. Rae, Vishal Maini, Dani Yogatama, Pushmeet Kohli
08 Nov 2019

Assessing Social and Intersectional Biases in Contextualized Word Representations
Y. Tan, Elisa Celis
04 Nov 2019

Toward Gender-Inclusive Coreference Resolution
Yang Trista Cao, Hal Daumé
30 Oct 2019

Context Matters: Recovering Human Semantic Structure from Machine Learning Analysis of Large-Scale Text Corpora
M. C. Iordan, Tyler Giallanza, C. Ellis, Nicole M. Beckage, Jonathan Cohen
15 Oct 2019

Constrained Non-Affine Alignment of Embeddings
Yuwei Wang, Yan Zheng, Yanqing Peng, Chin-Chia Michael Yeh, Zhongfang Zhuang, Mahashweta Das, Mangesh Bendre, Feifei Li, Wei Zhang, J. M. Phillips
13 Oct 2019

Perturbation Sensitivity Analysis to Detect Unintended Model Biases
Vinodkumar Prabhakaran, Ben Hutchinson, Margaret Mitchell
09 Oct 2019

Empirical Analysis of Multi-Task Learning for Reducing Model Bias in Toxic Comment Detection
Ameya Vaidya, Feng Mai, Yue Ning
21 Sep 2019

A General Framework for Implicit and Explicit Debiasing of Distributional Word Vector Spaces
Anne Lauscher, Goran Glavaš, Simone Paolo Ponzetto, Ivan Vulić
13 Sep 2019

Investigating Sports Commentator Bias within a Large Corpus of American Football Broadcasts
Jack Merullo, Luke Yeh, Abram Handler, Alvin Grissom II, Brendan O'Connor, Mohit Iyyer
07 Sep 2019

Examining Gender Bias in Languages with Grammatical Gender
Pei Zhou, Weijia Shi, Jieyu Zhao, Kuan-Hao Huang, Muhao Chen, Ryan Cotterell, Kai-Wei Chang
05 Sep 2019

Avoiding Resentment Via Monotonic Fairness
G. W. Cole, Sinead Williamson
03 Sep 2019

It's All in the Name: Mitigating Gender Bias with Name-Based Counterfactual Data Substitution
Rowan Hall Maudslay, Hila Gonen, Ryan Cotterell, Simone Teufel
02 Sep 2019

On Measuring and Mitigating Biased Inferences of Word Embeddings
Sunipa Dev, Tao Li, J. M. Phillips, Vivek Srikumar
25 Aug 2019

Release Strategies and the Social Impacts of Language Models
Irene Solaiman, Miles Brundage, Jack Clark, Amanda Askell, Ariel Herbert-Voss, ..., Miles McCain, Alex Newhouse, Jason Blazakis, Kris McGuffie, Jasmine Wang
24 Aug 2019

Gender Representation in French Broadcast Corpora and Its Impact on ASR Performance
Mahault Garnerin, Solange Rossato, Laurent Besacier
23 Aug 2019

A Survey on Bias and Fairness in Machine Learning
Ninareh Mehrabi, Fred Morstatter, N. Saxena, Kristina Lerman, Aram Galstyan
23 Aug 2019

Understanding Undesirable Word Embedding Associations
Kawin Ethayarajh, David Duvenaud, Graeme Hirst
18 Aug 2019

Auditing News Curation Systems: A Case Study Examining Algorithmic and Editorial Logic in Apple News
Jack Bandy, N. Diakopoulos
01 Aug 2019

Decoding the Style and Bias of Song Lyrics
M. Barman, Amit Awekar, Sambhav Kothari
17 Jul 2019

Training individually fair ML models with Sensitive Subspace Robustness
Mikhail Yurochkin, Amanda Bower, Yuekai Sun
28 Jun 2019

Good Secretaries, Bad Truck Drivers? Occupational Gender Stereotypes in Sentiment Analysis
J. Bhaskaran, Isha Bhallamudi
24 Jun 2019

Language Modelling Makes Sense: Propagating Representations through WordNet for Full-Coverage Word Sense Disambiguation
Daniel Loureiro, A. Jorge
24 Jun 2019

Mitigating Gender Bias in Natural Language Processing: Literature Review
Tony Sun, Andrew Gaut, Shirlyn Tang, Yuxin Huang, Mai Elsherief, Jieyu Zhao, Diba Mirza, E. Belding-Royer, Kai-Wei Chang, William Yang Wang
21 Jun 2019

Considerations for the Interpretation of Bias Measures of Word Embeddings
I. Mirzaev, Anthony Schulte, Michael D. Conover, Sam Shah
19 Jun 2019

Measuring Bias in Contextualized Word Representations
Keita Kurita, Nidhi Vyas, Ayush Pareek, A. Black, Yulia Tsvetkov
18 Jun 2019

Conceptor Debiasing of Word Representations Evaluated on WEAT
S. Karve, Lyle Ungar, João Sedoc
14 Jun 2019

Understanding artificial intelligence ethics and safety
David Leslie
11 Jun 2019

Training Temporal Word Embeddings with a Compass
Valerio Di Carlo, Federico Bianchi, M. Palmonari
05 Jun 2019

Tracing Antisemitic Language Through Diachronic Embedding Projections: France 1789-1914
Rocco Tripodi, M. Warglien, S. Sullam, Deborah Paci
04 Jun 2019

Gender-preserving Debiasing for Pre-trained Word Embeddings
Masahiro Kaneko, Danushka Bollegala
03 Jun 2019

Can We Derive Explicit and Implicit Bias from Corpus?
Bo Wang, Baixiang Xue, A. Greenwald
31 May 2019

Characterizing Bias in Classifiers using Generative Models
Daniel J. McDuff, Shuang Ma, Yale Song, Ashish Kapoor
30 May 2019

Racial Bias in Hate Speech and Abusive Language Detection Datasets
Thomas Davidson, Debasmita Bhattacharya, Ingmar Weber
29 May 2019

Fair is Better than Sensational: Man is to Doctor as Woman is to Doctor
Malvina Nissim, Rik van Noord, Rob van der Goot
23 May 2019

Proportionally Fair Clustering
Xingyu Chen, Brandon Fain, Charles Lyu, Kamesh Munagala
09 May 2019

Distributional Semantics and Linguistic Theory
Gemma Boleda
06 May 2019

Fairness-Aware Ranking in Search & Recommendation Systems with Application to LinkedIn Talent Search
S. Geyik, Stuart Ambler, K. Kenthapadi
30 Apr 2019

Are We Consistently Biased? Multidimensional Analysis of Biases in Distributional Word Vectors
Anne Lauscher, Goran Glavaš
26 Apr 2019