ResearchTrend.AI

Quantifying Social Biases Using Templates is Unreliable
arXiv: 2210.04337
9 October 2022
P. Seshadri, Pouya Pezeshkpour, Sameer Singh
Papers citing "Quantifying Social Biases Using Templates is Unreliable" (10 shown)

Explicit vs. Implicit: Investigating Social Bias in Large Language Models through Self-Reflection
Yachao Zhao, Bo Wang, Yan Wang — 04 Jan 2025

Revealing and Reducing Gender Biases in Vision and Language Assistants (VLAs)
Leander Girrbach, Yiran Huang, Stephan Alaniz, Trevor Darrell, Zeynep Akata — 25 Oct 2024 [VLM]

WinoPron: Revisiting English Winogender Schemas for Consistency, Coverage, and Grammatical Case
Vagrant Gautam, Julius Steuer, Eileen Bingert, Ray Johns, Anne Lauscher, Dietrich Klakow — 09 Sep 2024

GenderCARE: A Comprehensive Framework for Assessing and Reducing Gender Bias in Large Language Models
Kunsheng Tang, Wenbo Zhou, Jie Zhang, Aishan Liu, Gelei Deng, Shuai Li, Peigui Qi, Weiming Zhang, Tianwei Zhang, Nenghai Yu — 22 Aug 2024

The Unequal Opportunities of Large Language Models: Revealing Demographic Bias through Job Recommendations
A. Salinas, Parth Vipul Shah, Yuzhong Huang, Robert McCormack, Fred Morstatter — 03 Aug 2023

This Prompt is Measuring <MASK>: Evaluating Bias Evaluation in Language Models
Seraphina Goldfarb-Tarrant, Eddie L. Ungless, Esma Balkir, Su Lin Blodgett — 22 May 2023

Bias Beyond English: Counterfactual Tests for Bias in Sentiment Analysis in Four Languages
Seraphina Goldfarb-Tarrant, Adam Lopez, Roi Blanco, Diego Marcheggiani — 19 May 2023

The Tail Wagging the Dog: Dataset Construction Biases of Social Bias Benchmarks
Nikil Selvam, Sunipa Dev, Daniel Khashabi, Tushar Khot, Kai-Wei Chang — 18 Oct 2022 [ALM]

Robustness Gym: Unifying the NLP Evaluation Landscape
Karan Goel, Nazneen Rajani, Jesse Vig, Samson Tan, Jason M. Wu, Stephan Zheng, Caiming Xiong, Mohit Bansal, Christopher Ré — 13 Jan 2021 [AAML, OffRL, OOD]

The Woman Worked as a Babysitter: On Biases in Language Generation
Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, Nanyun Peng — 03 Sep 2019