ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.

Gender Bias in Coreference Resolution (arXiv 1804.09301)

25 April 2018
Rachel Rudinger
Jason Naradowsky
Brian Leonard
Benjamin Van Durme

Papers citing "Gender Bias in Coreference Resolution"

50 / 229 papers shown
Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
04 Mar 2022

Screening Gender Transfer in Neural Machine Translation
Guillaume Wisniewski, Lichao Zhu, Nicolas Ballier, François Yvon
25 Feb 2022

Impact of Pretraining Term Frequencies on Few-Shot Reasoning
Yasaman Razeghi, Robert L Logan IV, Matt Gardner, Sameer Singh
15 Feb 2022

Text and Code Embeddings by Contrastive Pre-Training
Arvind Neelakantan, Tao Xu, Raul Puri, Alec Radford, Jesse Michael Han, ..., Tabarak Khan, Toki Sherbakov, Joanne Jang, Peter Welinder, Lilian Weng
24 Jan 2022

A Survey on Gender Bias in Natural Language Processing
Karolina Stańczak, Isabelle Augenstein
28 Dec 2021

Measure and Improve Robustness in NLP Models: A Survey
Xuezhi Wang, Haohan Wang, Diyi Yang
15 Dec 2021

Measuring Fairness with Biased Rulers: A Survey on Quantifying Biases in Pretrained Language Models
Pieter Delobelle, E. Tokpo, T. Calders, Bettina Berendt
14 Dec 2021

GLaM: Efficient Scaling of Language Models with Mixture-of-Experts
Nan Du, Yanping Huang, Andrew M. Dai, Simon Tong, Dmitry Lepikhin, ..., Kun Zhang, Quoc V. Le, Yonghui Wu, Zhiwen Chen, Claire Cui
13 Dec 2021
Probing Linguistic Information For Logical Inference In Pre-trained Language Models
Zeming Chen, Qiyue Gao
03 Dec 2021

CO-STAR: Conceptualisation of Stereotypes for Analysis and Reasoning
Teyun Kwon, Anandha Gopalan
01 Dec 2021

A Systematic Investigation of Commonsense Knowledge in Large Language Models
Xiang Lorraine Li, A. Kuncoro, Jordan Hoffmann, Cyprien de Masson d'Autume, Phil Blunsom, Aida Nematzadeh
31 Oct 2021

The Dangers of Underclaiming: Reasons for Caution When Reporting How NLP Systems Fail
Sam Bowman
15 Oct 2021

BBQ: A Hand-Built Bias Benchmark for Question Answering
Alicia Parrish, Angelica Chen, Nikita Nangia, Vishakh Padmakumar, Jason Phang, Jana Thompson, Phu Mon Htut, Sam Bowman
15 Oct 2021

Faithful Target Attribute Prediction in Neural Machine Translation
Xing Niu, Georgiana Dinu, Prashant Mathur, Anna Currey
24 Sep 2021

Cross-lingual Transfer of Monolingual Models
Evangelia Gogoulou, Ariel Ekgren, T. Isbister, Magnus Sahlgren
15 Sep 2021

Towards Zero-shot Commonsense Reasoning with Self-supervised Refinement of Language Models
T. Klein, Moin Nabi
10 Sep 2021
Collecting a Large-Scale Gender Bias Dataset for Coreference Resolution and Machine Translation
Shahar Levy, Koren Lazar, Gabriel Stanovsky
08 Sep 2021

Sustainable Modular Debiasing of Language Models
Anne Lauscher, Tobias Lüken, Goran Glavaš
08 Sep 2021

Hi, my name is Martha: Using names to measure and mitigate bias in generative dialogue models
Eric Michael Smith, Adina Williams
07 Sep 2021

Enhancing Natural Language Representation with Large-Scale Out-of-Domain Commonsense
Wanyun Cui, Xingran Chen
06 Sep 2021

Why and How Governments Should Monitor AI Development
Jess Whittlestone, Jack Clark
28 Aug 2021

Harms of Gender Exclusivity and Challenges in Non-Binary Representation in Language Technologies
Sunipa Dev, Masoud Monajatipoor, Anaelia Ovalle, Arjun Subramonian, J. M. Phillips, Kai-Wei Chang
27 Aug 2021

Social Norm Bias: Residual Harms of Fairness-Aware Algorithms
Myra Cheng, Maria De-Arteaga, Lester W. Mackey, Adam Tauman Kalai
25 Aug 2021

On Measures of Biases and Harms in NLP
Sunipa Dev, Emily Sheng, Jieyu Zhao, Aubrie Amstutz, Jiao Sun, ..., M. Sanseverino, Jiin Kim, Akihiro Nishi, Nanyun Peng, Kai-Wei Chang
07 Aug 2021
Intersectional Bias in Causal Language Models
Liam Magee, Lida Ghahremanlou, K. Soldatić, S. Robertson
16 Jul 2021

The MultiBERTs: BERT Reproductions for Robustness Analysis
Thibault Sellam, Steve Yadlowsky, Jason W. Wei, Naomi Saphra, Alexander D'Amour, ..., Iulia Turc, Jacob Eisenstein, Dipanjan Das, Ian Tenney, Ellie Pavlick
30 Jun 2021

Quantifying Social Biases in NLP: A Generalization and Empirical Comparison of Extrinsic Fairness Metrics
Paula Czarnowska, Yogarshi Vyas, Kashif Shah
28 Jun 2021

Prompting Contrastive Explanations for Commonsense Reasoning Tasks
Bhargavi Paranjape, Julian Michael, Marjan Ghazvininejad, Luke Zettlemoyer, Hannaneh Hajishirzi
12 Jun 2021

Dynaboard: An Evaluation-As-A-Service Platform for Holistic Next-Generation Benchmarking
Zhiyi Ma, Kawin Ethayarajh, Tristan Thrush, Somya Jain, Ledell Yu Wu, Robin Jia, Christopher Potts, Adina Williams, Douwe Kiela
21 May 2021

How Reliable are Model Diagnostics?
V. Aribandi, Yi Tay, Donald Metzler
12 May 2021

Evaluating Gender Bias in Natural Language Inference
Shanya Sharma, Manan Dey, Koustuv Sinha
12 May 2021

Adapting Coreference Resolution for Processing Violent Death Narratives
Ankith Uppunda, S. Cochran, J. Foster, Alina Arseniev-Koehler, V. Mays, Kai-Wei Chang
30 Apr 2021

Revealing Persona Biases in Dialogue Systems
Emily Sheng, Josh Arnold, Zhou Yu, Kai-Wei Chang, Nanyun Peng
18 Apr 2021
Worst of Both Worlds: Biases Compound in Pre-trained Vision-and-Language Models
Tejas Srinivasan, Yonatan Bisk
18 Apr 2021

Competency Problems: On Finding and Removing Artifacts in Language Data
Matt Gardner, William Merrill, Jesse Dodge, Matthew E. Peters, Alexis Ross, Sameer Singh, Noah A. Smith
17 Apr 2021

Improving Gender Translation Accuracy with Filtered Self-Training
Prafulla Kumar Choubey, Anna Currey, Prashant Mathur, Georgiana Dinu
15 Apr 2021

First the worst: Finding better gender translations during beam search
D. Saunders, Rosie Sallis, Bill Byrne
15 Apr 2021

Gender Bias in Machine Translation
Beatrice Savoldi, Marco Gaido, L. Bentivogli, Matteo Negri, Marco Turchi
13 Apr 2021

Dynabench: Rethinking Benchmarking in NLP
Douwe Kiela, Max Bartolo, Yixin Nie, Divyansh Kaushik, Atticus Geiger, ..., Pontus Stenetorp, Robin Jia, Joey Tianyi Zhou, Christopher Potts, Adina Williams
07 Apr 2021

What Will it Take to Fix Benchmarking in Natural Language Understanding?
Samuel R. Bowman, George E. Dahl
05 Apr 2021

UNICORN on RAINBOW: A Universal Commonsense Reasoning Model on a New Multitask Benchmark
Nicholas Lourie, Ronan Le Bras, Chandra Bhagavatula, Yejin Choi
24 Mar 2021

Gender and Racial Fairness in Depression Research using Social Media
Carlos Alejandro Aguirre, Keith Harrigian, Mark Dredze
18 Mar 2021
Self-Diagnosis and Self-Debiasing: A Proposal for Reducing Corpus-Based Bias in NLP
Timo Schick, Sahana Udupa, Hinrich Schütze
28 Feb 2021

Stereotype and Skew: Quantifying Gender Bias in Pre-trained and Fine-tuned Language Models
Daniel de Vassimon Manela, D. Errington, Thomas Fisher, B. V. Breugel, Pasquale Minervini
24 Jan 2021

Dictionary-based Debiasing of Pre-trained Word Embeddings
Masahiro Kaneko, Danushka Bollegala
23 Jan 2021

Debiasing Pre-trained Contextualised Embeddings
Masahiro Kaneko, Danushka Bollegala
23 Jan 2021

Robustness Gym: Unifying the NLP Evaluation Landscape
Karan Goel, Nazneen Rajani, Jesse Vig, Samson Tan, Jason M. Wu, Stephan Zheng, Caiming Xiong, Joey Tianyi Zhou, Christopher Ré
13 Jan 2021

DynaSent: A Dynamic Benchmark for Sentiment Analysis
Christopher Potts, Zhengxuan Wu, Atticus Geiger, Douwe Kiela
30 Dec 2020

Gender Bias in Multilingual Neural Machine Translation: The Architecture Matters
Marta R. Costa-jussà, Carlos Escolano, Christine Basta, Javier Ferrando, Roser Batlle-Roca, Ksenia Kharitonova
24 Dec 2020

How Can We Know When Language Models Know? On the Calibration of Language Models for Question Answering
Zhengbao Jiang, Jun Araki, Haibo Ding, Graham Neubig
02 Dec 2020