Uncovering and Quantifying Social Biases in Code Generation
arXiv:2305.15377 · 24 May 2023
Y. Liu, Xiaokang Chen, Yan Gao, Zhe Su, Fengji Zhang, Daoguang Zan, Jian-Guang Lou, Pin-Yu Chen, Tsung-Yi Ho

Papers citing "Uncovering and Quantifying Social Biases in Code Generation" (17 papers shown)

1. Code Red! On the Harmfulness of Applying Off-the-shelf Large Language Models to Programming Tasks (02 Apr 2025)
   Ali Al-Kaswan, Sebastian Deatc, Begüm Koç, A. van Deursen, M. Izadi

2. LLMs Love Python: A Study of LLMs' Bias for Programming Languages and Libraries (21 Mar 2025)
   Lukas Twist, Jie M. Zhang, Mark Harman, Don Syme, Joost Noppen, Detlef Nauck

3. Mapping the Trust Terrain: LLMs in Software Engineering -- Insights and Perspectives (18 Mar 2025)
   Dipin Khati, Yijin Liu, David Nader-Palacio, Yixuan Zhang, Denys Poshyvanyk

4. Large Language Models for Code Generation: A Comprehensive Survey of Challenges, Techniques, Evaluation, and Applications (03 Mar 2025)
   Nam Huynh, Beiyu Lin

5. Mastering the Craft of Data Synthesis for CodeLLMs (16 Oct 2024)
   Meng Chen, Philip Arthur, Qianyu Feng, Cong Duy Vu Hoang, Yu-Heng Hong, ..., Mark Johnson, K. K., Don Dharmasiri, Long Duong, Yuan-Fang Li

6. Mitigating Gender Bias in Code Large Language Models via Model Editing (10 Oct 2024)
   Z. Qin, Haochuan Wang, Zecheng Wang, Deyuan Liu, Cunhang Fan, Zhao Lv, Zhiying Tu, Dianhui Chu, Dianbo Sui

7. Trustworthiness in Retrieval-Augmented Generation Systems: A Survey (16 Sep 2024)
   Yujia Zhou, Yan Liu, Xiaoxi Li, Jiajie Jin, Hongjin Qian, Zheng Liu, Chaozhuo Li, Zhicheng Dou, Tsung-Yi Ho, Philip S. Yu

8. Improving Long Text Understanding with Knowledge Distilled from Summarization Model (08 May 2024)
   Yan Liu, Yazheng Yang, Xiaokang Chen

9. Bias Testing and Mitigation in LLM-based Code Generation (03 Sep 2023)
   Dong Huang, Qingwen Bu, Jie M. Zhang, Xiaofei Xie, Junjie Chen, Heming Cui

10. Uncovering and Categorizing Social Biases in Text-to-SQL (25 May 2023)
    Y. Liu, Yan Gao, Zhe Su, Xiaokang Chen, Elliott Ash, Jian-Guang Lou

11. Gender Bias in Meta-Embeddings (19 May 2022)
    Masahiro Kaneko, Danushka Bollegala, Naoaki Okazaki

12. BBQ: A Hand-Built Bias Benchmark for Question Answering (15 Oct 2021)
    Alicia Parrish, Angelica Chen, Nikita Nangia, Vishakh Padmakumar, Jason Phang, Jana Thompson, Phu Mon Htut, Sam Bowman

13. Toward Annotator Group Bias in Crowdsourcing (08 Oct 2021)
    Haochen Liu, J. Thekinen, Sinem Mollaoglu, Da Tang, Ji Yang, Youlong Cheng, Hui Liu, Jiliang Tang

14. Trustworthy AI: A Computational Perspective (12 Jul 2021)
    Haochen Liu, Yiqi Wang, Wenqi Fan, Xiaorui Liu, Yaxin Li, Shaili Jain, Yunhao Liu, Anil K. Jain, Jiliang Tang

15. Privacy and Robustness in Federated Learning: Attacks and Defenses (07 Dec 2020)
    Lingjuan Lyu, Han Yu, Xingjun Ma, Chen Chen, Lichao Sun, Jun Zhao, Qiang Yang, Philip S. Yu

16. The Woman Worked as a Babysitter: On Biases in Language Generation (03 Sep 2019)
    Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, Nanyun Peng

17. A Survey on Bias and Fairness in Machine Learning (23 Aug 2019)
    Ninareh Mehrabi, Fred Morstatter, N. Saxena, Kristina Lerman, Aram Galstyan