GenderAlign: An Alignment Dataset for Mitigating Gender Bias in Large Language Models
arXiv: 2406.13925 · 20 June 2024
Tao Zhang, Ziqian Zeng, Yuxiang Xiao, Huiping Zhuang, Cen Chen, James R. Foulds, Shimei Pan

Papers citing "GenderAlign: An Alignment Dataset for Mitigating Gender Bias in Large Language Models" (4 papers):

Bridging Today and the Future of Humanity: AI Safety in 2024 and Beyond
Shanshan Han · 09 Oct 2024 · 1 citation

CORGI-PM: A Chinese Corpus For Gender Bias Probing and Mitigation
Ge Zhang, Yizhi Li, Yaoyao Wu, Linyuan Zhang, Chenghua Lin, Jiayi Geng, Shi Wang, Jie Fu · 01 Jan 2023 · 10 citations

Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe · 04 Mar 2022 · 11,730 citations

BBQ: A Hand-Built Bias Benchmark for Question Answering
Alicia Parrish, Angelica Chen, Nikita Nangia, Vishakh Padmakumar, Jason Phang, Jana Thompson, Phu Mon Htut, Sam Bowman · 15 Oct 2021 · 364 citations