Semantics derived automatically from language corpora contain human-like biases

Aylin Caliskan, J. Bryson, Arvind Narayanan · 25 August 2016 · arXiv:1608.07187

Papers citing "Semantics derived automatically from language corpora contain human-like biases" (50 of 518 shown)
  • Semantic maps and metrics for science using deep transformer encoders
    Brendan Chambers, James A. Evans · 13 Apr 2021 · MedIm
  • VERB: Visualizing and Interpreting Bias Mitigation Techniques for Word Representations
    Archit Rathore, Sunipa Dev, J. M. Phillips, Vivek Srikumar, Yan Zheng, Chin-Chia Michael Yeh, Junpeng Wang, Wei Zhang, Bei Wang · 06 Apr 2021

  • Quantifying Bias in Automatic Speech Recognition
    Siyuan Feng, O. Kudina, B. Halpern, O. Scharenborg · 28 Mar 2021

  • FairFil: Contrastive Neural Debiasing Method for Pretrained Text Encoders
    Pengyu Cheng, Weituo Hao, Siyang Yuan, Shijing Si, Lawrence Carin · 11 Mar 2021

  • Large Pre-trained Language Models Contain Human-like Biases of What is Right and Wrong to Do
    P. Schramowski, Cigdem Turan, Nico Andersen, Constantin Rothkopf, Kristian Kersting · 08 Mar 2021

  • Counterfactuals and Causability in Explainable Artificial Intelligence: Theory, Algorithms, and Applications
    Yu-Liang Chou, Catarina Moreira, P. Bruza, Chun Ouyang, Joaquim A. Jorge · 07 Mar 2021 · CML

  • WordBias: An Interactive Visual Tool for Discovering Intersectional Biases Encoded in Word Embeddings
    Bhavya Ghai, Md. Naimul Hoque, Klaus Mueller · 05 Mar 2021

  • Measuring Model Biases in the Absence of Ground Truth
    Osman Aka, Ken Burke, Alex Bauerle, Christina Greer, Margaret Mitchell · 05 Mar 2021

  • Self-Diagnosis and Self-Debiasing: A Proposal for Reducing Corpus-Based Bias in NLP
    Timo Schick, Sahana Udupa, Hinrich Schütze · 28 Feb 2021

  • Directional Bias Amplification
    Angelina Wang, Olga Russakovsky · 24 Feb 2021
  • Automated Evaluation Of Psychotherapy Skills Using Speech And Language Technologies
    Nikolaos Flemotomos, Víctor R. Martínez, Zhuohao Chen, Karan Singla, V. Ardulov, ..., S. P. Lord, Tad Hirsch, Zac E. Imel, David C. Atkins, Shrikanth Narayanan · 22 Feb 2021

  • What Do We Want From Explainable Artificial Intelligence (XAI)? -- A Stakeholder Perspective on XAI and a Conceptual Model Guiding Interdisciplinary XAI Research
    Markus Langer, Daniel Oster, Timo Speith, Holger Hermanns, Lena Kästner, Eva Schmidt, Andreas Sesing, Kevin Baum · 15 Feb 2021 · XAI

  • Bias Out-of-the-Box: An Empirical Analysis of Intersectional Occupational Biases in Popular Generative Language Models
    Hannah Rose Kirk, Yennie Jun, Haider Iqbal, Elias Benussi, Filippo Volpin, F. Dreyer, Aleksandar Shtedritski, Yuki M. Asano · 08 Feb 2021

  • Symbolic Behaviour in Artificial Intelligence
    Adam Santoro, Andrew Kyle Lampinen, Kory W. Mathewson, Timothy Lillicrap, David Raposo · 05 Feb 2021

  • Detecting discriminatory risk through data annotation based on Bayesian inferences
    E. Beretta, A. Vetrò, Bruno Lepri, Juan Carlos De Martin · 27 Jan 2021

  • Low-skilled Occupations Face the Highest Upskilling Pressure
    Di Tong, Lingfei Wu, James Allen Evans · 27 Jan 2021

  • Stereotype and Skew: Quantifying Gender Bias in Pre-trained and Fine-tuned Language Models
    Daniel de Vassimon Manela, D. Errington, Thomas Fisher, B. V. Breugel, Pasquale Minervini · 24 Jan 2021

  • Dictionary-based Debiasing of Pre-trained Word Embeddings
    Masahiro Kaneko, Danushka Bollegala · 23 Jan 2021 · FaML

  • Debiasing Pre-trained Contextualised Embeddings
    Masahiro Kaneko, Danushka Bollegala · 23 Jan 2021

  • Censorship of Online Encyclopedias: Implications for NLP Models
    Eddie Yang, Margaret E. Roberts · 22 Jan 2021

  • Intrinsic Bias Metrics Do Not Correlate with Application Bias
    Seraphina Goldfarb-Tarrant, Rebecca Marchant, Ricardo Muñoz Sánchez, Mugdha Pandya, Adam Lopez · 31 Dec 2020
  • Fairness in Machine Learning
    L. Oneto, Silvia Chiappa · 31 Dec 2020 · FaML

  • Confronting Abusive Language Online: A Survey from the Ethical and Human Rights Perspective
    S. Kiritchenko, I. Nejadgholi, Kathleen C. Fraser · 22 Dec 2020 · AILaw

  • Fairkit, Fairkit, on the Wall, Who's the Fairest of Them All? Supporting Data Scientists in Training Fair Models
    Brittany Johnson, Jesse Bartola, Rico Angell, Katherine Keith, Sam Witty, S. Giguere, Yuriy Brun · 17 Dec 2020 · FaML

  • Towards Neural Programming Interfaces
    Zachary Brown, Nathaniel R. Robinson, David Wingate, Nancy Fulda · 10 Dec 2020 · AI4CE

  • Data and its (dis)contents: A survey of dataset development and use in machine learning research
    Amandalynne Paullada, Inioluwa Deborah Raji, Emily M. Bender, Emily L. Denton, A. Hanna · 09 Dec 2020

  • The Geometry of Distributed Representations for Better Alignment, Attenuated Bias, and Improved Interpretability
    Sunipa Dev · 25 Nov 2020

  • Argument from Old Man's View: Assessing Social Bias in Argumentation
    Maximilian Spliethover, Henning Wachsmuth · 24 Nov 2020

  • Debiasing Convolutional Neural Networks via Meta Orthogonalization
    Kurtis Evan David, Qiang Liu, Ruth C. Fong · 15 Nov 2020 · FaML

  • Situated Data, Situated Systems: A Methodology to Engage with Power Relations in Natural Language Processing Research
    Lucy Havens, Melissa Mhairi Terras, Benjamin Bach, Beatrice Alex · 11 Nov 2020

  • Underspecification Presents Challenges for Credibility in Modern Machine Learning
    Alexander D'Amour, Katherine A. Heller, D. Moldovan, Ben Adlam, B. Alipanahi, ..., Kellie Webster, Steve Yadlowsky, T. Yun, Xiaohua Zhai, D. Sculley · 06 Nov 2020 · OffRL
  • Semantic and Relational Spaces in Science of Science: Deep Learning Models for Article Vectorisation
    Diego Kozlowski, Jennifer Dusdal, Jun Pang, A. Zilian · 05 Nov 2020

  • Investigating Societal Biases in a Poetry Composition System
    Emily Sheng, David C. Uthus · 05 Nov 2020

  • "Thy algorithm shalt not bear false witness": An Evaluation of Multiclass Debiasing Methods on Word Embeddings
    Thalea Schlender, Gerasimos Spanakis · 30 Oct 2020

  • Image Representations Learned With Unsupervised Pre-Training Contain Human-like Biases
    Ryan Steed, Aylin Caliskan · 28 Oct 2020 · SSL

  • Towards Ethics by Design in Online Abusive Content Detection
    S. Kiritchenko, I. Nejadgholi · 28 Oct 2020

  • Unmasking Contextual Stereotypes: Measuring and Mitigating BERT's Gender Bias
    Marion Bartl, Malvina Nissim, Albert Gatt · 27 Oct 2020

  • Discovering and Interpreting Biased Concepts in Online Communities
    Xavier Ferrer-Aran, Tom van Nuenen, Natalia Criado, Jose Such · 27 Oct 2020

  • Fair Embedding Engine: A Library for Analyzing and Mitigating Gender Bias in Word Embeddings
    Vaibhav Kumar, Tenzin Singhay Bhotia, Vaibhav Kumar · 25 Oct 2020 · FaML

  • On Transferability of Bias Mitigation Effects in Language Model Fine-Tuning
    Xisen Jin, Francesco Barbieri, Brendan Kennedy, Aida Mostafazadeh Davani, Leonardo Neves, Xiang Ren · 24 Oct 2020

  • Rethinking embedding coupling in pre-trained language models
    Hyung Won Chung, Thibault Févry, Henry Tsai, Melvin Johnson, Sebastian Ruder · 24 Oct 2020
  • Fairness in Streaming Submodular Maximization: Algorithms and Hardness
    Marwa El Halabi, Slobodan Mitrović, A. Norouzi-Fard, Jakab Tardos, Jakub Tarnawski · 14 Oct 2020

  • Explainability for fair machine learning
    T. Begley, Tobias Schwedes, Christopher Frye, Ilya Feige · 14 Oct 2020 · FaML, FedML

  • Measuring and Reducing Gendered Correlations in Pre-trained Models
    Kellie Webster, Xuezhi Wang, Ian Tenney, Alex Beutel, Emily Pitler, Ellie Pavlick, Jilin Chen, Ed Chi, Slav Petrov · 12 Oct 2020 · FaML

  • Robustness and Reliability of Gender Bias Assessment in Word Embeddings: The Role of Base Pairs
    Haiyang Zhang, Alison Sneyd, Mark Stevenson · 06 Oct 2020

  • Astraea: Grammar-based Fairness Testing
    E. Soremekun, Sakshi Udeshi, Sudipta Chattopadhyay · 06 Oct 2020

  • We Don't Speak the Same Language: Interpreting Polarization through Machine Translation
    Ashiqur R. KhudaBukhsh, Rupak Sarkar, M. Kamlet, Tom Michael Mitchell · 05 Oct 2020

  • Fairness in Machine Learning: A Survey
    Simon Caton, C. Haas · 04 Oct 2020 · FaML

  • Quantifying social organization and political polarization in online platforms
    Isaac Waller, Ashton Anderson · 01 Oct 2020

  • CrowS-Pairs: A Challenge Dataset for Measuring Social Biases in Masked Language Models
    Nikita Nangia, Clara Vania, Rasika Bhalerao, Samuel R. Bowman · 30 Sep 2020