ResearchTrend.AI
arXiv:2106.14574 · Cited By
Quantifying Social Biases in NLP: A Generalization and Empirical Comparison of Extrinsic Fairness Metrics

Paula Czarnowska, Yogarshi Vyas, Kashif Shah
28 June 2021

Papers citing "Quantifying Social Biases in NLP: A Generalization and Empirical Comparison of Extrinsic Fairness Metrics"

50 / 62 papers shown
  • Attention Pruning: Automated Fairness Repair of Language Models via Surrogate Simulated Annealing. Vishnu Asutosh Dasu, Md. Rafi Ur Rashid, Vipul Gupta, Saeid Tizpaz-Niari, Gang Tan. 20 Mar 2025. [AAML]
  • Implicit Bias in LLMs: A Survey. Xinru Lin, Luyang Li. 04 Mar 2025.
  • Integrating LLMs with ITS: Recent Advances, Potentials, Challenges, and Future Directions. Doaa Mahmud, Hadeel Hajmohamed, Shamma Almentheri, Shamma Alqaydi, Lameya Aldhaheri, R. A. Khalil, Nasir Saeed. 08 Jan 2025. [AI4TS]
  • Does Differential Privacy Impact Bias in Pretrained NLP Models? Md. Khairul Islam, Andrew Wang, Tianhao Wang, Yangfeng Ji, Judy Fox, Jieyu Zhao. 24 Oct 2024. [AI4CE]
  • Hey GPT, Can You be More Racist? Analysis from Crowdsourced Attempts to Elicit Biased Content from Generative AI. Hangzhi Guo, Pranav Narayanan Venkit, Eunchae Jang, Mukund Srinath, Wenbo Zhang, Bonam Mingole, Vipul Gupta, Kush R. Varshney, S. Shyam Sundar, A. Yadav. 20 Oct 2024.
  • On the Influence of Gender and Race in Romantic Relationship Prediction from Large Language Models. Abhilasha Sancheti, Haozhe An, Rachel Rudinger. 05 Oct 2024.
  • Do Multilingual Large Language Models Mitigate Stereotype Bias? Shangrui Nie, Michael Fromm, Charles F Welch, Rebekka Görge, Akbar Karimi, Joan Plepi, Nazia Afsan Mowmita, Nicolas Flores-Herr, Mehdi Ali, Lucie Flek. 08 Jul 2024.
  • AI Safety in Generative AI Large Language Models: A Survey. Jaymari Chua, Yun Yvonna Li, Shiyi Yang, Chen Wang, Lina Yao. 06 Jul 2024. [LM&MA]
  • A Study of Nationality Bias in Names and Perplexity using Off-the-Shelf Affect-related Tweet Classifiers. Valentin Barriere, Sebastian Cifuentes. 01 Jul 2024.
  • See It from My Perspective: How Language Affects Cultural Bias in Image Understanding. Amith Ananthram, Elias Stengel-Eskin, Carl Vondrick, Mohit Bansal. 17 Jun 2024. [VLM]
  • Improving Commonsense Bias Classification by Mitigating the Influence of Demographic Terms. JinKyu Lee, Jihie Kim. 11 Jun 2024.
  • Uncovering Bias in Large Vision-Language Models at Scale with Counterfactuals. Phillip Howard, Kathleen C. Fraser, Anahita Bhiwandiwalla, S. Kiritchenko. 30 May 2024.
  • Understanding Position Bias Effects on Fairness in Social Multi-Document Summarization. Olubusayo Olabisi, Ameeta Agrawal. 03 May 2024.
  • The Impact of Unstated Norms in Bias Analysis of Language Models. Farnaz Kohankhaki, David B. Emerson, Laleh Seyyed-Kalantari, Faiza Khan Khattak. 04 Apr 2024.
  • Addressing Both Statistical and Causal Gender Fairness in NLP Models. Hannah Chen, Yangfeng Ji, David E. Evans. 30 Mar 2024.
  • From Melting Pots to Misrepresentations: Exploring Harms in Generative AI. Sanjana Gautam, Pranav Narayanan Venkit, Sourojit Ghosh. 16 Mar 2024.
  • Measuring Bias in a Ranked List using Term-based Representations. Amin Abolghasemi, Leif Azzopardi, Arian Askari, Maarten de Rijke, Suzan Verberne. 09 Mar 2024.
  • Twists, Humps, and Pebbles: Multilingual Speech Recognition Models Exhibit Gender Performance Gaps. Giuseppe Attanasio, Beatrice Savoldi, Dennis Fucci, Dirk Hovy. 28 Feb 2024.
  • A Note on Bias to Complete. Jia Xu, Mona Diab. 18 Feb 2024.
  • FairSISA: Ensemble Post-Processing to Improve Fairness of Unlearning in LLMs. S. Kadhe, Anisa Halimi, Ambrish Rawat, Nathalie Baracaldo. 12 Dec 2023. [MU]
  • FFT: Towards Harmlessness Evaluation and Analysis for LLMs with Factuality, Fairness, Toxicity. Shiyao Cui, Zhenyu Zhang, Yilong Chen, Wenyuan Zhang, Tianyun Liu, Siqi Wang, Tingwen Liu. 30 Nov 2023.
  • Social Bias Probing: Fairness Benchmarking for Language Models. Marta Marchiori Manerba, Karolina Stańczak, Riccardo Guidotti, Isabelle Augenstein. 15 Nov 2023.
  • Step by Step to Fairness: Attributing Societal Bias in Task-oriented Dialogue Systems. Hsuan Su, Rebecca Qian, Chinnadhurai Sankar, Shahin Shayandeh, Shang-Tse Chen, Hung-yi Lee, Daniel M. Bikel. 11 Nov 2023.
  • Do Not Harm Protected Groups in Debiasing Language Representation Models. Chloe Qinyu Zhu, Rickard Stureborg, Brandon Fain. 27 Oct 2023.
  • Examining Temporal Bias in Abusive Language Detection. Mali Jin, Yida Mu, Diana Maynard, Kalina Bontcheva. 25 Sep 2023.
  • Survey of Social Bias in Vision-Language Models. Nayeon Lee, Yejin Bang, Holy Lovenia, Samuel Cahyawijaya, Wenliang Dai, Pascale Fung. 24 Sep 2023. [VLM]
  • Investigating Subtler Biases in LLMs: Ageism, Beauty, Institutional, and Nationality Bias in Generative Models. M. Kamruzzaman, M. M. I. Shovon, Gene Louis Kim. 16 Sep 2023.
  • Cultural Alignment in Large Language Models: An Explanatory Analysis Based on Hofstede's Cultural Dimensions. Reem I. Masoud, Ziquan Liu, Martin Ferianc, Philip C. Treleaven, Miguel R. D. Rodrigues. 25 Aug 2023.
  • CALM: A Multi-task Benchmark for Comprehensive Assessment of Language Model Bias. Vipul Gupta, Pranav Narayanan Venkit, Hugo Laurençon, Shomir Wilson, R. Passonneau. 24 Aug 2023.
  • Automated Ableism: An Exploration of Explicit Disability Biases in Sentiment and Toxicity Analysis Models. Pranav Narayanan Venkit, Mukund Srinath, Shomir Wilson. 18 Jul 2023.
  • WinoQueer: A Community-in-the-Loop Benchmark for Anti-LGBTQ+ Bias in Large Language Models. Virginia K. Felkner, Ho-Chun Herbert Chang, Eugene Jang, Jonathan May. 26 Jun 2023. [OSLM]
  • Sociodemographic Bias in Language Models: A Survey and Forward Path. Vipul Gupta, Pranav Narayanan Venkit, Shomir Wilson, R. Passonneau. 13 Jun 2023.
  • Soft-prompt Tuning for Large Language Models to Evaluate Bias. Jacob-Junqi Tian, David B. Emerson, Sevil Zanjani Miyandoab, D. Pandya, Laleh Seyyed-Kalantari, Faiza Khan Khattak. 07 Jun 2023. [VLM]
  • Nichelle and Nancy: The Influence of Demographic Attributes and Tokenization Length on First Name Biases. Haozhe An, Rachel Rudinger. 26 May 2023.
  • Having Beer after Prayer? Measuring Cultural Bias in Large Language Models. Tarek Naous, Michael Joseph Ryan, Alan Ritter, Wei-ping Xu. 23 May 2023.
  • This Prompt is Measuring <MASK>: Evaluating Bias Evaluation in Language Models. Seraphina Goldfarb-Tarrant, Eddie L. Ungless, Esma Balkir, Su Lin Blodgett. 22 May 2023.
  • In the Name of Fairness: Assessing the Bias in Clinical Record De-identification. Yuxin Xiao, S. Lim, Tom Pollard, Marzyeh Ghassemi. 18 May 2023.
  • Comparing Biases and the Impact of Multilingual Training across Multiple Languages. Sharon Levy, Neha Ann John, Ling Liu, Yogarshi Vyas, Jie Ma, Yoshinari Fujinuma, Miguel Ballesteros, Vittorio Castelli, Dan Roth. 18 May 2023.
  • On the Origins of Bias in NLP through the Lens of the Jim Code. Fatma Elsafoury, Gavin Abercrombie. 16 May 2023.
  • On the Independence of Association Bias and Empirical Fairness in Language Models. Laura Cabello, Anna Katrine van Zee, Anders Søgaard. 20 Apr 2023.
  • Language Model Behavior: A Comprehensive Survey. Tyler A. Chang, Benjamin Bergen. 20 Mar 2023. [VLM, LRM, LM&MA]
  • Nationality Bias in Text Generation. Pranav Narayanan Venkit, Sanjana Gautam, Ruchi Panchanadikar, Ting-Hao 'Kenneth' Huang, Shomir Wilson. 05 Feb 2023.
  • A Comprehensive Study of Gender Bias in Chemical Named Entity Recognition Models. Xingmeng Zhao, A. Niazi, Anthony Rios. 24 Dec 2022.
  • Trustworthy Social Bias Measurement. Rishi Bommasani, Percy Liang. 20 Dec 2022.
  • Towards Procedural Fairness: Uncovering Biases in How a Toxic Language Classifier Uses Sentiment Information. I. Nejadgholi, Esma Balkir, Kathleen C. Fraser, S. Kiritchenko. 19 Oct 2022.
  • The Tail Wagging the Dog: Dataset Construction Biases of Social Bias Benchmarks. Nikil Selvam, Sunipa Dev, Daniel Khashabi, Tushar Khot, Kai-Wei Chang. 18 Oct 2022. [ALM]
  • BERTScore is Unfair: On Social Bias in Language Model-Based Metrics for Text Generation. Tianxiang Sun, Junliang He, Xipeng Qiu, Xuanjing Huang. 14 Oct 2022.
  • SODAPOP: Open-Ended Discovery of Social Biases in Social Commonsense Reasoning Models. Haozhe An, Zongxia Li, Jieyu Zhao, Rachel Rudinger. 13 Oct 2022.
  • Conformalized Fairness via Quantile Regression. Meichen Liu, Lei Ding, Dengdeng Yu, Wulong Liu, Linglong Kong, Bei Jiang. 05 Oct 2022.
  • Towards WinoQueer: Developing a Benchmark for Anti-Queer Bias in Large Language Models. Virginia K. Felkner, Ho-Chun Herbert Chang, Eugene Jang, Jonathan May. 23 Jun 2022. [OSLM]