ResearchTrend.AI


Semantics derived automatically from language corpora contain human-like biases
arXiv:1608.07187 · v4 (latest) · 25 August 2016
Aylin Caliskan
J. Bryson
Arvind Narayanan

Papers citing "Semantics derived automatically from language corpora contain human-like biases"

50 / 512 papers shown
Collecting a Large-Scale Gender Bias Dataset for Coreference Resolution and Machine Translation
Shahar Levy
Koren Lazar
Gabriel Stanovsky
72
70
0
08 Sep 2021
Sustainable Modular Debiasing of Language Models
Anne Lauscher
Tobias Lüken
Goran Glavaš
131
124
0
08 Sep 2021
Hi, my name is Martha: Using names to measure and mitigate bias in generative dialogue models
Eric Michael Smith
Adina Williams
120
28
0
07 Sep 2021
Fair Representation: Guaranteeing Approximate Multiple Group Fairness for Unknown Tasks
Xudong Shen
Yongkang Wong
Mohan S. Kankanhalli
FaML
95
20
0
01 Sep 2021
Harms of Gender Exclusivity and Challenges in Non-Binary Representation in Language Technologies
Sunipa Dev
Masoud Monajatipoor
Anaelia Ovalle
Arjun Subramonian
J. M. Phillips
Kai-Wei Chang
141
176
0
27 Aug 2021
Social Norm Bias: Residual Harms of Fairness-Aware Algorithms
Myra Cheng
Maria De-Arteaga
Lester W. Mackey
Adam Tauman Kalai
FaML
98
9
0
25 Aug 2021
Diachronic Analysis of German Parliamentary Proceedings: Ideological Shifts through the Lens of Political Biases
Tobias Walter
Celina Kirschner
Steffen Eger
Goran Glavaš
Anne Lauscher
Simone Paolo Ponzetto
70
20
0
13 Aug 2021
Retiring Adult: New Datasets for Fair Machine Learning
Frances Ding
Moritz Hardt
John Miller
Ludwig Schmidt
233
463
0
10 Aug 2021
On Measures of Biases and Harms in NLP
Sunipa Dev
Emily Sheng
Jieyu Zhao
Aubrie Amstutz
Jiao Sun
...
M. Sanseverino
Jiin Kim
Akihiro Nishi
Nanyun Peng
Kai-Wei Chang
85
88
0
07 Aug 2021
Mitigating Dataset Harms Requires Stewardship: Lessons from 1000 Papers
Kenny Peng
Arunesh Mathur
Arvind Narayanan
195
97
0
06 Aug 2021
Evolution of emotion semantics
Aotao Xu
J. Stellar
Yang Xu
CVBM
34
14
0
05 Aug 2021
Spinning Sequence-to-Sequence Models with Meta-Backdoors
Eugene Bagdasaryan
Vitaly Shmatikov
SILM AAML
86
8
0
22 Jul 2021
Theoretical foundations and limits of word embeddings: what types of meaning can they capture?
Alina Arseniev-Koehler
68
21
0
22 Jul 2021
A Survey on Bias in Visual Datasets
Simone Fabbrizzi
Symeon Papadopoulos
Eirini Ntoutsi
Y. Kompatsiaris
203
129
0
16 Jul 2021
MultiBench: Multiscale Benchmarks for Multimodal Representation Learning
Paul Pu Liang
Yiwei Lyu
Xiang Fan
Zetian Wu
Yun Cheng
...
Peter Wu
Michelle A. Lee
Yuke Zhu
Ruslan Salakhutdinov
Louis-Philippe Morency
VLM
111
172
0
15 Jul 2021
Auditing for Diversity using Representative Examples
Vijay Keswani
L. E. Celis
69
3
0
15 Jul 2021
On the Interaction of Belief Bias and Explanations
Ana Valeria González
Anna Rogers
Anders Søgaard
FAtt
80
19
0
29 Jun 2021
Quantifying Social Biases in NLP: A Generalization and Empirical Comparison of Extrinsic Fairness Metrics
Paula Czarnowska
Yogarshi Vyas
Kashif Shah
87
112
0
28 Jun 2021
A Source-Criticism Debiasing Method for GloVe Embeddings
Hope McGovern
32
2
0
25 Jun 2021
Towards Understanding and Mitigating Social Biases in Language Models
Paul Pu Liang
Chiyu Wu
Louis-Philippe Morency
Ruslan Salakhutdinov
102
399
0
24 Jun 2021
A Survey of Race, Racism, and Anti-Racism in NLP
Anjalie Field
Su Lin Blodgett
Zeerak Talat
Yulia Tsvetkov
97
124
0
21 Jun 2021
Stratified Learning: A General-Purpose Statistical Method for Improved Learning under Covariate Shift
Maximilian Autenrieth
David van Dyk
R. Trotta
D. Stenning
OOD
25
3
0
21 Jun 2021
Understanding and Evaluating Racial Biases in Image Captioning
Dora Zhao
Angelina Wang
Olga Russakovsky
71
138
0
16 Jun 2021
RedditBias: A Real-World Resource for Bias Evaluation and Debiasing of Conversational Language Models
Soumya Barikeri
Anne Lauscher
Ivan Vulić
Goran Glavaš
100
184
0
07 Jun 2021
Understanding and Countering Stereotypes: A Computational Approach to the Stereotype Content Model
Kathleen C. Fraser
I. Nejadgholi
S. Kiritchenko
74
41
0
04 Jun 2021
Alexa, Google, Siri: What are Your Pronouns? Gender and Anthropomorphism in the Design and Perception of Conversational Assistants
Gavin Abercrombie
Amanda Cercas Curry
Mugdha Pandya
Verena Rieser
81
54
0
04 Jun 2021
How to Adapt Your Pretrained Multilingual Model to 1600 Languages
Abteen Ebrahimi
Katharina Kann
LRM VLM
98
70
0
03 Jun 2021
Men Are Elected, Women Are Married: Events Gender Bias on Wikipedia
Jiao Sun
Nanyun Peng
66
48
0
03 Jun 2021
Gender Bias Hidden Behind Chinese Word Embeddings: The Case of Chinese Adjectives
Meichun Jiao
Ziyang Luo
63
9
0
01 Jun 2021
CogView: Mastering Text-to-Image Generation via Transformers
Ming Ding
Zhuoyi Yang
Wenyi Hong
Wendi Zheng
Chang Zhou
...
Junyang Lin
Xu Zou
Zhou Shao
Hongxia Yang
Jie Tang
ViT VLM
155
784
0
26 May 2021
Bias in Machine Learning Software: Why? How? What to do?
Joymallya Chakraborty
Suvodeep Majumder
Tim Menzies
FaML
93
206
0
25 May 2021
Dynaboard: An Evaluation-As-A-Service Platform for Holistic Next-Generation Benchmarking
Zhiyi Ma
Kawin Ethayarajh
Tristan Thrush
Somya Jain
Ledell Yu Wu
Robin Jia
Christopher Potts
Adina Williams
Douwe Kiela
ELM
115
59
0
21 May 2021
Obstructing Classification via Projection
P. Haghighatkhah
Wouter Meulemans
Bettina Speckmann
Jérôme Urhausen
Kevin Verbeek
49
6
0
19 May 2021
A Deep Metric Learning Approach to Account Linking
Aleem Khan
Elizabeth Fleming
N. Schofield
M. Bishop
Nicholas Andrews
59
23
0
15 May 2021
How Reliable are Model Diagnostics?
V. Aribandi
Yi Tay
Donald Metzler
103
19
0
12 May 2021
Evaluating Gender Bias in Natural Language Inference
Shanya Sharma
Manan Dey
Koustuv Sinha
81
41
0
12 May 2021
Addressing "Documentation Debt" in Machine Learning Research: A Retrospective Datasheet for BookCorpus
Jack Bandy
Nicholas Vincent
69
57
0
11 May 2021
What's in the Box? A Preliminary Analysis of Undesirable Content in the Common Crawl Corpus
A. Luccioni
J. Viviano
102
119
0
06 May 2021
Societal Biases in Retrieved Contents: Measurement Framework and Adversarial Mitigation for BERT Rankers
Navid Rekabsaz
Simone Kopeinik
Markus Schedl
83
63
0
28 Apr 2021
Fair Representation Learning for Heterogeneous Information Networks
Huiping Zhuang
Rashidul Islam
Kamrun Naher Keya
James R. Foulds
Yangqiu Song
Shimei Pan
57
41
0
18 Apr 2021
Worst of Both Worlds: Biases Compound in Pre-trained Vision-and-Language Models
Tejas Srinivasan
Yonatan Bisk
VLM
83
56
0
18 Apr 2021
SummVis: Interactive Visual Analysis of Models, Data, and Evaluation for Text Summarization
Jesse Vig
Wojciech Kryściński
Karan Goel
Nazneen Rajani
69
22
0
15 Apr 2021
Unmasking the Mask -- Evaluating Social Biases in Masked Language Models
Masahiro Kaneko
Danushka Bollegala
66
72
0
15 Apr 2021
On the Interpretability and Significance of Bias Metrics in Texts: a PMI-based Approach
Francisco Valentini
Germán Rosati
Damián E. Blasi
D. Slezak
Edgar Altszyler
39
3
0
13 Apr 2021
Gender Bias in Machine Translation
Beatrice Savoldi
Marco Gaido
L. Bentivogli
Matteo Negri
Marco Turchi
198
209
0
13 Apr 2021
Semantic maps and metrics for science using deep transformer encoders
Brendan Chambers
James A. Evans
MedIm
46
0
0
13 Apr 2021
VERB: Visualizing and Interpreting Bias Mitigation Techniques for Word Representations
Archit Rathore
Sunipa Dev
J. M. Phillips
Vivek Srikumar
Yan Zheng
Chin-Chia Michael Yeh
Junpeng Wang
Wei Zhang
Bei Wang
80
11
0
06 Apr 2021
Quantifying Bias in Automatic Speech Recognition
Siyuan Feng
O. Kudina
B. Halpern
O. Scharenborg
63
87
0
28 Mar 2021
FairFil: Contrastive Neural Debiasing Method for Pretrained Text Encoders
Pengyu Cheng
Weituo Hao
Siyang Yuan
Shijing Si
Lawrence Carin
77
105
0
11 Mar 2021
Large Pre-trained Language Models Contain Human-like Biases of What is Right and Wrong to Do
P. Schramowski
Cigdem Turan
Nico Andersen
Constantin Rothkopf
Kristian Kersting
120
298
0
08 Mar 2021