arXiv:2301.04347
Counteracts: Testing Stereotypical Representation in Pre-trained Language Models

11 January 2023
Damin Zhang
Julia Taylor Rayz
Romila Pradhan

Papers citing "Counteracts: Testing Stereotypical Representation in Pre-trained Language Models"

Sorting through the noise: Testing robustness of information processing in pre-trained language models
Lalchand Pandia, Allyson Ettinger (25 Sep 2021)
Measuring and Improving Consistency in Pretrained Language Models
Yanai Elazar, Nora Kassner, Shauli Ravfogel, Abhilasha Ravichander, Eduard H. Hovy, Hinrich Schütze, Yoav Goldberg (01 Feb 2021)
Analyzing Commonsense Emergence in Few-shot Knowledge Models
Jeff Da, Ronan Le Bras, Ximing Lu, Yejin Choi, Antoine Bosselut (01 Jan 2021)
Spying on your neighbors: Fine-grained probing of contextual embeddings for information about surrounding words
Josef Klafka, Allyson Ettinger (04 May 2020)
Language Models as Knowledge Bases?
Fabio Petroni, Tim Rocktaschel, Patrick Lewis, A. Bakhtin, Yuxiang Wu, Alexander H. Miller, Sebastian Riedel (03 Sep 2019)