FairDistillation: Mitigating Stereotyping in Language Models
Pieter Delobelle, Bettina Berendt
arXiv:2207.04546 · 10 July 2022
Papers citing "FairDistillation: Mitigating Stereotyping in Language Models"

8 papers
Bias in Large Language Models: Origin, Evaluation, and Mitigation
Yufei Guo, Muzhe Guo, Juntao Su, Zhou Yang, Mengqiu Zhu, Hongfei Li, Mengyang Qiu, Shuo Shuo Liu
16 Nov 2024
Promoting Equality in Large Language Models: Identifying and Mitigating the Implicit Bias based on Bayesian Theory
Yongxin Deng, Xihe Qiu, Jue Chen, Jing Pan, Chen Jue, Zhijun Fang, Yinghui Xu, Wei Chu, Yuan Qi
20 Aug 2024
Deconstructing The Ethics of Large Language Models from Long-standing Issues to New-emerging Dilemmas
Chengyuan Deng, Yiqun Duan, Xin Jin, Heng Chang, Yijun Tian, ..., Kuofeng Gao, Sihong He, Jun Zhuang, Lu Cheng, Haohan Wang
08 Jun 2024
Beyond Behaviorist Representational Harms: A Plan for Measurement and Mitigation
Conference on Fairness, Accountability and Transparency (FAccT), 2024
Jennifer Chien, David Danks
25 Jan 2024
An investigation of structures responsible for gender bias in BERT and DistilBERT
International Symposium on Intelligent Data Analysis (IDA), 2024
Thibaud Leteno, Antoine Gourru, Charlotte Laclau, Christophe Gravier
12 Jan 2024
Tackling Bias in Pre-trained Language Models: Current Trends and Under-represented Societies
Vithya Yogarajan, Gillian Dobbie, Te Taka Keegan, R. Neuwirth
03 Dec 2023
Bias and Fairness in Large Language Models: A Survey
Computational Linguistics (CL), 2023
Isabel O. Gallegos, Ryan Rossi, Joe Barrow, Md Mehrab Tanjim, Sungchul Kim, Franck Dernoncourt, Tong Yu, Ruiyi Zhang, Nesreen Ahmed
02 Sep 2023
A Survey on Fairness in Large Language Models
Yingji Li, Mengnan Du, Rui Song, Xin Wang, Ying Wang
20 Aug 2023