GenderAlign: An Alignment Dataset for Mitigating Gender Bias in Large Language Models

20 June 2024
Tao Zhang, Ziqian Zeng, Yuxiang Xiao, Huiping Zhuang, Cen Chen, James R. Foulds, Shimei Pan

Papers citing "GenderAlign: An Alignment Dataset for Mitigating Gender Bias in Large Language Models"

4 / 4 papers shown
Bridging Today and the Future of Humanity: AI Safety in 2024 and Beyond
Shanshan Han
09 Oct 2024

CORGI-PM: A Chinese Corpus For Gender Bias Probing and Mitigation
Ge Zhang, Yizhi Li, Yaoyao Wu, Linyuan Zhang, Chenghua Lin, Jiayi Geng, Shi Wang, Jie Fu
01 Jan 2023

Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
04 Mar 2022

BBQ: A Hand-Built Bias Benchmark for Question Answering
Alicia Parrish, Angelica Chen, Nikita Nangia, Vishakh Padmakumar, Jason Phang, Jana Thompson, Phu Mon Htut, Sam Bowman
15 Oct 2021