"You are grounded!": Latent Name Artifacts in Pre-trained Language Models
Vered Shwartz, Rachel Rudinger, Oyvind Tafjord
6 April 2020 · arXiv:2004.03012
Papers citing "You are grounded!": Latent Name Artifacts in Pre-trained Language Models (14 papers shown)
- On the Influence of Gender and Race in Romantic Relationship Prediction from Large Language Models. Abhilasha Sancheti, Haozhe An, Rachel Rudinger. 05 Oct 2024.
- WinoPron: Revisiting English Winogender Schemas for Consistency, Coverage, and Grammatical Case. Vagrant Gautam, Julius Steuer, Eileen Bingert, Ray Johns, Anne Lauscher, Dietrich Klakow. 09 Sep 2024.
- Do Large Language Models Discriminate in Hiring Decisions on the Basis of Race, Ethnicity, and Gender? Haozhe An, Christabel Acquaye, Colin Wang, Zongxia Li, Rachel Rudinger. 15 Jun 2024.
- Reducing Sensitivity on Speaker Names for Text Generation from Dialogues. Qi Jia, Haifeng Tang, Kenny Q. Zhu. 23 May 2023.
- Revisiting the Architectures like Pointer Networks to Efficiently Improve the Next Word Distribution, Summarization Factuality, and Beyond. Haw-Shiuan Chang, Zonghai Yao, Alolika Gon, Hong-ye Yu, Andrew McCallum. 20 May 2023.
- Counter-GAP: Counterfactual Bias Evaluation through Gendered Ambiguous Pronouns. Zhongbin Xie, Vid Kocijan, Thomas Lukasiewicz, Oana-Maria Camburu. 11 Feb 2023.
- SODAPOP: Open-Ended Discovery of Social Biases in Social Commonsense Reasoning Models. Haozhe An, Zongxia Li, Jieyu Zhao, Rachel Rudinger. 13 Oct 2022.
- Multilingual Coreference Resolution in Multiparty Dialogue. Boyuan Zheng, Patrick Xia, M. Yarmohammadi, Benjamin Van Durme. 02 Aug 2022.
- Richer Countries and Richer Representations. Kaitlyn Zhou, Kawin Ethayarajh, Dan Jurafsky. 10 May 2022.
- Identifying and Measuring Token-Level Sentiment Bias in Pre-trained Language Models with Prompts. Apoorv Garg, Deval Srivastava, Zhiyang Xu, Lifu Huang. 15 Apr 2022.
- Does Entity Abstraction Help Generative Transformers Reason? Nicolas Angelard-Gontier, Siva Reddy, C. Pal. 05 Jan 2022.
- On Measures of Biases and Harms in NLP. Sunipa Dev, Emily Sheng, Jieyu Zhao, Aubrie Amstutz, Jiao Sun, ..., M. Sanseverino, Jiin Kim, Akihiro Nishi, Nanyun Peng, Kai-Wei Chang. 07 Aug 2021.
- The Woman Worked as a Babysitter: On Biases in Language Generation. Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, Nanyun Peng. 03 Sep 2019.
- Language Models as Knowledge Bases? Fabio Petroni, Tim Rocktäschel, Patrick Lewis, A. Bakhtin, Yuxiang Wu, Alexander H. Miller, Sebastian Riedel. 03 Sep 2019.