Counteracts: Testing Stereotypical Representation in Pre-trained Language Models
arXiv: 2301.04347
11 January 2023
Damin Zhang, Julia Taylor Rayz, Romila Pradhan
Papers citing "Counteracts: Testing Stereotypical Representation in Pre-trained Language Models" (5 / 5 papers shown)
Sorting through the noise: Testing robustness of information processing in pre-trained language models
Lalchand Pandia, Allyson Ettinger
25 Sep 2021

Measuring and Improving Consistency in Pretrained Language Models
Yanai Elazar, Nora Kassner, Shauli Ravfogel, Abhilasha Ravichander, Eduard H. Hovy, Hinrich Schütze, Yoav Goldberg
HILM
01 Feb 2021

Analyzing Commonsense Emergence in Few-shot Knowledge Models
Jeff Da, Ronan Le Bras, Ximing Lu, Yejin Choi, Antoine Bosselut
AI4MH, KELM
01 Jan 2021

Spying on your neighbors: Fine-grained probing of contextual embeddings for information about surrounding words
Josef Klafka, Allyson Ettinger
04 May 2020

Language Models as Knowledge Bases?
Fabio Petroni, Tim Rocktaschel, Patrick Lewis, A. Bakhtin, Yuxiang Wu, Alexander H. Miller, Sebastian Riedel
KELM, AI4MH
03 Sep 2019