Adversarial Examples Generation for Reducing Implicit Gender Bias in Pre-trained Models

3 October 2021
Wenqian Ye
Fei Xu
Yaojia Huang
Cassie Huang
A. Ji

Papers citing "Adversarial Examples Generation for Reducing Implicit Gender Bias in Pre-trained Models"

2 / 2 papers shown
Towards an Enhanced Understanding of Bias in Pre-trained Neural Language Models: A Survey with Special Emphasis on Affective Bias
Anoop Kadan
Manjary P. Gangan
Deepak P
Lajish V. L.
AI4CE
21 Apr 2022
Double-Hard Debias: Tailoring Word Embeddings for Gender Bias Mitigation
Tianlu Wang
Xi Lin
Nazneen Rajani
Bryan McCann
Vicente Ordonez
Caiming Xiong
CVBM
03 May 2020