Semantics derived automatically from language corpora contain human-like biases

25 August 2016
Aylin Caliskan
J. Bryson
Arvind Narayanan
ArXiv (abs) · PDF · HTML

Papers citing "Semantics derived automatically from language corpora contain human-like biases"

50 / 513 papers shown
Fairness in the Eyes of the Data: Certifying Machine-Learning Models
Shahar Segal
Yossi Adi
Benny Pinkas
Carsten Baum
C. Ganesh
Joseph Keshet
FedML
70
37
0
03 Sep 2020
The Language Interpretability Tool: Extensible, Interactive Visualizations and Analysis for NLP Models
Ian Tenney
James Wexler
Jasmijn Bastings
Tolga Bolukbasi
Andy Coenen
...
Ellen Jiang
Mahima Pushkarna
Carey Radebaugh
Emily Reif
Ann Yuan
VLM
130
196
0
12 Aug 2020
Assessing Demographic Bias in Named Entity Recognition
Shubhanshu Mishra
Sijun He
Luca Belli
59
47
0
08 Aug 2020
Discovering and Categorising Language Biases in Reddit
Xavier Ferrer Aran
Tom van Nuenen
Jose Such
Natalia Criado
39
49
0
06 Aug 2020
Defining and Evaluating Fair Natural Language Generation
C. Yeo
A. Chen
80
24
0
28 Jul 2020
Towards Debiasing Sentence Representations
Paul Pu Liang
Irene Li
Emily Zheng
Y. Lim
Ruslan Salakhutdinov
Louis-Philippe Morency
106
242
0
16 Jul 2020
Cultural Cartography with Word Embeddings
Dustin S. Stoltz
Marshall A. Taylor
57
39
0
09 Jul 2020
OSCaR: Orthogonal Subspace Correction and Rectification of Biases in Word Embeddings
Sunipa Dev
Tao Li
J. M. Phillips
Vivek Srikumar
119
55
0
30 Jun 2020
Adversarial Learning for Debiasing Knowledge Graph Embeddings
Mario Arduini
Lorenzo Noci
Federico Pirovano
Ce Zhang
Yash Raj Shrestha
B. Paudel
FaML
68
35
0
29 Jun 2020
MDR Cluster-Debias: A Nonlinear Word Embedding Debiasing Pipeline
Yuhao Du
K. Joseph
34
3
0
20 Jun 2020
Two Simple Ways to Learn Individual Fairness Metrics from Data
Debarghya Mukherjee
Mikhail Yurochkin
Moulinath Banerjee
Yuekai Sun
FaML
92
97
0
19 Jun 2020
Group-Fair Online Allocation in Continuous Time
Semih Cayci
Swati Gupta
A. Eryilmaz
FaML
63
20
0
11 Jun 2020
Disparate Impact of Artificial Intelligence Bias in Ridehailing Economy's Price Discrimination Algorithms
Akshat Pandey
Aylin Caliskan
71
12
0
08 Jun 2020
Detecting Emergent Intersectional Biases: Contextualized Word Embeddings Contain a Distribution of Human-like Biases
W. Guo
Aylin Caliskan
59
245
0
06 Jun 2020
ValNorm Quantifies Semantics to Reveal Consistent Valence Biases Across Languages and Over Centuries
Autumn Toney
Aylin Caliskan
73
23
0
06 Jun 2020
Nurse is Closer to Woman than Surgeon? Mitigating Gender-Biased Proximities in Word Embeddings
Vaibhav Kumar
Tenzin Singhay Bhotia
Vaibhav Kumar
Tanmoy Chakraborty
CVBM
84
46
0
02 Jun 2020
Language (Technology) is Power: A Critical Survey of "Bias" in NLP
Su Lin Blodgett
Solon Barocas
Hal Daumé
Hanna M. Wallach
159
1,261
0
28 May 2020
CausaLM: Causal Model Explanation Through Counterfactual Language Models
Amir Feder
Nadav Oved
Uri Shalit
Roi Reichart
CML LRM
161
162
0
27 May 2020
Embeddings-Based Clustering for Target Specific Stances: The Case of a Polarized Turkey
Ammar Rashed
Mucahid Kutlu
Kareem Darwish
Tamer Elsayed
Cansin Bayrak
79
53
0
19 May 2020
(Re)construing Meaning in NLP
Sean Trott
Tiago Timponi Torrent
Nancy Chang
Nathan Schneider
AI4CE
48
30
0
18 May 2020
Studying the Transfer of Biases from Programmers to Programs
Christian Johansen
Tore Pedersen
Johanna Johansen
16
7
0
17 May 2020
Mitigating Gender Bias in Machine Learning Data Sets
Susan Leavy
G. Meaney
Karen Wade
Derek Greene
FaML
52
37
0
14 May 2020
Personalized Chatbot Trustworthiness Ratings
Biplav Srivastava
F. Rossi
Sheema Usmani
Mariana Bernagozzi
39
20
0
13 May 2020
Deep Learning for Political Science
Kakia Chatsiou
Slava Jankin
AI4CE
68
13
0
13 May 2020
Towards Robustifying NLI Models Against Lexical Dataset Biases
Xiang Zhou
Joey Tianyi Zhou
64
58
0
10 May 2020
Cyberbullying Detection with Fairness Constraints
O. Gencoglu
93
49
0
09 May 2020
Spying on your neighbors: Fine-grained probing of contextual embeddings for information about surrounding words
Josef Klafka
Allyson Ettinger
85
43
0
04 May 2020
On the Relationships Between the Grammatical Genders of Inanimate Nouns and Their Co-Occurring Adjectives and Verbs
Adina Williams
Ryan Cotterell
Lawrence Wolf-Sonkin
Damián E. Blasi
Hanna M. Wallach
82
19
0
03 May 2020
Double-Hard Debias: Tailoring Word Embeddings for Gender Bias Mitigation
Tianlu Wang
Xi Lin
Nazneen Rajani
Bryan McCann
Vicente Ordonez
Caiming Xiong
CVBM
256
57
0
03 May 2020
Social Biases in NLP Models as Barriers for Persons with Disabilities
Ben Hutchinson
Vinodkumar Prabhakaran
Emily L. Denton
Kellie Webster
Yu Zhong
Stephen Denuyl
83
314
0
02 May 2020
Multi-Dimensional Gender Bias Classification
Emily Dinan
Angela Fan
Ledell Yu Wu
Jason Weston
Douwe Kiela
Adina Williams
FaML
88
124
0
01 May 2020
Do Neural Ranking Models Intensify Gender Bias?
Navid Rekabsaz
Markus Schedl
65
58
0
01 May 2020
Beneath the Tip of the Iceberg: Current Challenges and New Directions in Sentiment Analysis Research
Soujanya Poria
Devamanyu Hazarika
Navonil Majumder
Rada Mihalcea
135
221
0
01 May 2020
Demographics Should Not Be the Reason of Toxicity: Mitigating Discrimination in Text Classifications with Instance Weighting
Guanhua Zhang
Bing Bai
Junqi Zhang
Kun Bai
Conghui Zhu
Tiejun Zhao
103
71
0
29 Apr 2020
When do Word Embeddings Accurately Reflect Surveys on our Beliefs About People?
K. Joseph
Jonathan H. Morgan
55
27
0
25 Apr 2020
StereoSet: Measuring stereotypical bias in pretrained language models
Moin Nadeem
Anna Bethke
Siva Reddy
103
1,027
0
20 Apr 2020
Automatically Characterizing Targeted Information Operations Through Biases Present in Discourse on Twitter
Autumn Toney
Akshat Pandey
W. Guo
David A. Broniatowski
Aylin Caliskan
47
3
0
18 Apr 2020
Unsupervised Discovery of Implicit Gender Bias
Anjalie Field
Yulia Tsvetkov
92
49
0
17 Apr 2020
REVISE: A Tool for Measuring and Mitigating Bias in Visual Datasets
Angelina Wang
Alexander Liu
Ryan Zhang
Anat Kleiman
Leslie Kim
Dora Zhao
Iroha Shirai
Arvind Narayanan
Olga Russakovsky
89
191
0
16 Apr 2020
Null It Out: Guarding Protected Attributes by Iterative Nullspace Projection
Shauli Ravfogel
Yanai Elazar
Hila Gonen
Michael Twiton
Yoav Goldberg
156
388
0
16 Apr 2020
Compass-aligned Distributional Embeddings for Studying Semantic Differences across Corpora
Federico Bianchi
Valerio Di Carlo
P. Nicoli
M. Palmonari
40
7
0
13 Apr 2020
"You are grounded!": Latent Name Artifacts in Pre-trained Language
  Models
"You are grounded!": Latent Name Artifacts in Pre-trained Language Models
Vered Shwartz
Rachel Rudinger
Oyvind Tafjord
55
51
0
06 Apr 2020
Machine learning as a model for cultural learning: Teaching an algorithm what it means to be fat
Alina Arseniev-Koehler
J. Foster
84
49
0
24 Mar 2020
Joint Multiclass Debiasing of Word Embeddings
Radovan Popović
Florian Lemmerich
M. Strohmaier
FaML
76
6
0
09 Mar 2020
Deconfounded Image Captioning: A Causal Retrospect
Xu Yang
Hanwang Zhang
Jianfei Cai
CML
79
126
0
09 Mar 2020
A Framework for the Computational Linguistic Analysis of Dehumanization
Julia Mendelsohn
Yulia Tsvetkov
Dan Jurafsky
160
94
0
06 Mar 2020
Fair Adversarial Networks
G. Cevora
46
4
0
23 Feb 2020
Measuring Social Biases in Grounded Vision and Language Embeddings
Candace Ross
Boris Katz
Andrei Barbu
103
65
0
20 Feb 2020
Word Embeddings Inherently Recover the Conceptual Organization of the Human Mind
Victor Swift
21
0
0
06 Feb 2020
Do I Look Like a Criminal? Examining how Race Presentation Impacts Human Judgement of Recidivism
Keri Mallari
K. Quinn
Paul Johns
Sarah Tan
Divya Ramesh
Ece Kamar
FaML
68
30
0
04 Feb 2020