Language (Technology) is Power: A Critical Survey of "Bias" in NLP

28 May 2020
Su Lin Blodgett
Solon Barocas
Hal Daumé
Hanna M. Wallach
arXiv: 2005.14050

Papers citing "Language (Technology) is Power: A Critical Survey of "Bias" in NLP"

50 / 203 papers shown
Zero-Shot Learners for Natural Language Understanding via a Unified Multiple Choice Perspective
Ping Yang
Junjie Wang
Ruyi Gan
Xinyu Zhu
Lin Zhang
Ziwei Wu
Xinyu Gao
Jiaxing Zhang
Tetsuya Sakai
BDL
14
25
0
16 Oct 2022
The User-Aware Arabic Gender Rewriter
Bashar Alhafni
Ossama Obeid
Nizar Habash
21
2
0
14 Oct 2022
COFFEE: Counterfactual Fairness for Personalized Text Generation in Explainable Recommendation
Nan Wang
Qifan Wang
Yi-Chia Wang
Maziar Sanjabi
Jingzhou Liu
Hamed Firooz
Hongning Wang
Shaoliang Nie
28
6
0
14 Oct 2022
SODAPOP: Open-Ended Discovery of Social Biases in Social Commonsense Reasoning Models
Haozhe An
Zongxia Li
Jieyu Zhao
Rachel Rudinger
16
25
0
13 Oct 2022
Back to the Future: On Potential Histories in NLP
Zeerak Talat
Anne Lauscher
AI4TS
30
4
0
12 Oct 2022
Social-Group-Agnostic Word Embedding Debiasing via the Stereotype Content Model
Ali Omrani
Brendan Kennedy
M. Atari
Morteza Dehghani
24
1
0
11 Oct 2022
Sociotechnical Harms of Algorithmic Systems: Scoping a Taxonomy for Harm Reduction
Renee Shelby
Shalaleh Rismani
Kathryn Henne
AJung Moon
Negar Rostamzadeh
...
N'Mah Yilla-Akbari
Jess Gallegos
A. Smart
Emilio Garcia
Gurleen Virk
34
188
0
11 Oct 2022
The Lifecycle of "Facts": A Survey of Social Bias in Knowledge Graphs
Angelie Kraft
Ricardo Usbeck
KELM
20
9
0
07 Oct 2022
A Human Rights-Based Approach to Responsible AI
Vinodkumar Prabhakaran
Margaret Mitchell
Timnit Gebru
Iason Gabriel
41
36
0
06 Oct 2022
Improving alignment of dialogue agents via targeted human judgements
Amelia Glaese
Nat McAleese
Maja Trębacz
John Aslanides
Vlad Firoiu
...
John F. J. Mellor
Demis Hassabis
Koray Kavukcuoglu
Lisa Anne Hendricks
G. Irving
ALM
AAML
227
500
0
28 Sep 2022
A Review of Challenges in Machine Learning based Automated Hate Speech Detection
Abhishek Velankar
H. Patil
Raviraj Joshi
32
8
0
12 Sep 2022
Lost in Translation: Reimagining the Machine Learning Life Cycle in Education
Lydia T. Liu
Serena Wang
Tolani A. Britton
Rediet Abebe
AI4Ed
19
1
0
08 Sep 2022
"Es geht um Respekt, nicht um Technologie": Erkenntnisse aus einem
  Interessensgruppen-übergreifenden Workshop zu genderfairer Sprache und
  Sprachtechnologie
"Es geht um Respekt, nicht um Technologie": Erkenntnisse aus einem Interessensgruppen-übergreifenden Workshop zu genderfairer Sprache und Sprachtechnologie
Sabrina Burtscher
Katta Spiel
Lukas Daniel Klausner
Manuel Lardelli
Dagmar Gromann
13
7
0
06 Sep 2022
Debiasing Word Embeddings with Nonlinear Geometry
Lu Cheng
Nayoung Kim
Huan Liu
16
5
0
29 Aug 2022
Using Large Language Models to Simulate Multiple Humans and Replicate Human Subject Studies
Gati Aher
Rosa I. Arriaga
Adam Tauman Kalai
45
343
0
18 Aug 2022
Visual Comparison of Language Model Adaptation
R. Sevastjanova
E. Cakmak
Shauli Ravfogel
Ryan Cotterell
Mennatallah El-Assady
VLM
41
16
0
17 Aug 2022
Towards No.1 in CLUE Semantic Matching Challenge: Pre-trained Language Model Erlangshen with Propensity-Corrected Loss
Junjie Wang
Yuxiang Zhang
Ping Yang
Ruyi Gan
11
2
0
05 Aug 2022
AlexaTM 20B: Few-Shot Learning Using a Large-Scale Multilingual Seq2Seq Model
Saleh Soltan
Shankar Ananthakrishnan
Jack G. M. FitzGerald
Rahul Gupta
Wael Hamza
...
Mukund Sridhar
Fabian Triefenbach
Apurv Verma
Gökhan Tür
Premkumar Natarajan
46
82
0
02 Aug 2022
On the Limitations of Sociodemographic Adaptation with Transformers
Chia-Chien Hung
Anne Lauscher
Dirk Hovy
Simone Paolo Ponzetto
Goran Glavaš
19
0
0
01 Aug 2022
A Hazard Analysis Framework for Code Synthesis Large Language Models
Heidy Khlaaf
Pamela Mishkin
Joshua Achiam
Gretchen Krueger
Miles Brundage
ELM
17
28
0
25 Jul 2022
BERTIN: Efficient Pre-Training of a Spanish Language Model using Perplexity Sampling
Javier de la Rosa
E. G. Ponferrada
Paulo Villegas
Pablo González de Prado Salas
Manu Romero
María Grandury
30
95
0
14 Jul 2022
A methodology to characterize bias and harmful stereotypes in natural language processing in Latin America
Laura Alonso Alemany
Luciana Benotti
Hernán Maina
Lucía González
Mariela Rajngewerc
...
Guido Ivetta
Alexia Halvorsen
Amanda Rojo
M. Bordone
Beatriz Busaniche
24
3
0
14 Jul 2022
FairDistillation: Mitigating Stereotyping in Language Models
Pieter Delobelle
Bettina Berendt
20
8
0
10 Jul 2022
Towards WinoQueer: Developing a Benchmark for Anti-Queer Bias in Large Language Models
Virginia K. Felkner
Ho-Chun Herbert Chang
Eugene Jang
Jonathan May
OSLM
21
8
0
23 Jun 2022
Characteristics of Harmful Text: Towards Rigorous Benchmarking of Language Models
Maribeth Rauh
John F. J. Mellor
J. Uesato
Po-Sen Huang
Johannes Welbl
...
Amelia Glaese
G. Irving
Iason Gabriel
William S. Isaac
Lisa Anne Hendricks
25
49
0
16 Jun 2022
Detecting Harmful Online Conversational Content towards LGBTQIA+ Individuals
Jamell Dacon
Harry Shomer
Shaylynn Crum-Dacon
Jiliang Tang
17
8
0
15 Jun 2022
Resolving the Human Subjects Status of Machine Learning's Crowdworkers
Divyansh Kaushik
Zachary Chase Lipton
A. London
25
2
0
08 Jun 2022
Challenges in Applying Explainability Methods to Improve the Fairness of NLP Models
Esma Balkir
S. Kiritchenko
I. Nejadgholi
Kathleen C. Fraser
21
36
0
08 Jun 2022
On Reinforcement Learning and Distribution Matching for Fine-Tuning Language Models with no Catastrophic Forgetting
Tomasz Korbak
Hady ElSahar
Germán Kruszewski
Marc Dymetman
CLL
15
49
0
01 Jun 2022
Conditional Supervised Contrastive Learning for Fair Text Classification
Jianfeng Chi
Will Shand
Yaodong Yu
Kai-Wei Chang
Han Zhao
Yuan Tian
FaML
46
14
0
23 May 2022
KOLD: Korean Offensive Language Dataset
Younghoon Jeong
Juhyun Oh
Jaimeen Ahn
Jongwon Lee
Jihyung Moon
Sungjoon Park
Alice H. Oh
40
25
0
23 May 2022
"I'm sorry to hear that": Finding New Biases in Language Models with a
  Holistic Descriptor Dataset
"I'm sorry to hear that": Finding New Biases in Language Models with a Holistic Descriptor Dataset
Eric Michael Smith
Melissa Hall
Melanie Kambadur
Eleonora Presani
Adina Williams
65
129
0
18 May 2022
Deconstructing NLG Evaluation: Evaluation Practices, Assumptions, and Their Implications
Kaitlyn Zhou
Su Lin Blodgett
Adam Trischler
Hal Daumé
Kaheer Suleman
Alexandra Olteanu
ELM
94
26
0
13 May 2022
Mitigating Gender Stereotypes in Hindi and Marathi
Neeraja Kirtane
Tanvi Anand
18
8
0
12 May 2022
Counterfactually Augmented Data and Unintended Bias: The Case of Sexism and Hate Speech Detection
Indira Sen
Mattia Samory
Claudia Wagner
Isabelle Augenstein
24
16
0
09 May 2022
Learning Disentangled Textual Representations via Statistical Measures of Similarity
Pierre Colombo
Guillaume Staerman
Nathan Noiry
Pablo Piantanida
FaML
DRL
38
21
0
07 May 2022
Rethinking Fairness: An Interdisciplinary Survey of Critiques of Hegemonic ML Fairness Approaches
Lindsay Weinberg
FaML
SyDa
24
58
0
06 May 2022
Handling and Presenting Harmful Text in NLP Research
Hannah Rose Kirk
Abeba Birhane
Bertie Vidgen
Leon Derczynski
13
47
0
29 Apr 2022
Justice in Misinformation Detection Systems: An Analysis of Algorithms, Stakeholders, and Potential Harms
Terrence Neumann
Maria De-Arteaga
S. Fazelpour
25
22
0
28 Apr 2022
How Gender Debiasing Affects Internal Model Representations, and Why It Matters
Hadas Orgad
Seraphina Goldfarb-Tarrant
Yonatan Belinkov
18
18
0
14 Apr 2022
Easy Adaptation to Mitigate Gender Bias in Multilingual Text Classification
Xiaolei Huang
FaML
13
8
0
12 Apr 2022
A Well-Composed Text is Half Done! Composition Sampling for Diverse Conditional Generation
Shashi Narayan
Gonçalo Simões
Yao-Min Zhao
Joshua Maynez
Dipanjan Das
Michael Collins
Mirella Lapata
26
30
0
28 Mar 2022
Challenges and Strategies in Cross-Cultural NLP
Daniel Hershcovich
Stella Frank
Heather Lent
Miryam de Lhoneux
Mostafa Abdou
...
Ruixiang Cui
Constanza Fierro
Katerina Margatina
Phillip Rust
Anders Søgaard
41
162
0
18 Mar 2022
Speciesist Language and Nonhuman Animal Bias in English Masked Language Models
Masashi Takeshita
Rafal Rzepka
K. Araki
24
6
0
10 Mar 2022
Training language models to follow instructions with human feedback
Long Ouyang
Jeff Wu
Xu Jiang
Diogo Almeida
Carroll L. Wainwright
...
Amanda Askell
Peter Welinder
Paul Christiano
Jan Leike
Ryan J. Lowe
OSLM
ALM
311
11,915
0
04 Mar 2022
Language technology practitioners as language managers: arbitrating data bias and predictive bias in ASR
Nina Markl
S. McNulty
22
9
0
25 Feb 2022
Handling Bias in Toxic Speech Detection: A Survey
Tanmay Garg
Sarah Masud
Tharun Suresh
Tanmoy Chakraborty
9
89
0
26 Jan 2022
Whose Language Counts as High Quality? Measuring Language Ideologies in Text Data Selection
Suchin Gururangan
Dallas Card
Sarah K. Dreier
E. K. Gade
Leroy Z. Wang
Zeyu Wang
Luke Zettlemoyer
Noah A. Smith
172
73
0
25 Jan 2022
Causal effect of racial bias in data and machine learning algorithms on user persuasiveness & discriminatory decision making: An Empirical Study
Kinshuk Sengupta
Praveen Ranjan Srivastava
28
6
0
22 Jan 2022
A Survey on Gender Bias in Natural Language Processing
Karolina Stańczak
Isabelle Augenstein
28
109
0
28 Dec 2021