"You Gotta be a Doctor, Lin": An Investigation of Name-Based Bias of
  Large Language Models in Employment Recommendations

"You Gotta be a Doctor, Lin": An Investigation of Name-Based Bias of Large Language Models in Employment Recommendations

18 June 2024
H. Nghiem, John J. Prindle, Jieyu Zhao, Hal Daumé III
arXiv:2406.12232

Papers citing ""You Gotta be a Doctor, Lin": An Investigation of Name-Based Bias of Large Language Models in Employment Recommendations"

11 / 11 papers shown
Surface Fairness, Deep Bias: A Comparative Study of Bias in Language Models
Aleksandra Sorokovikova, Pavel Chizhov, Iuliia Eremenko, Ivan P. Yamshchikov
12 Jun 2025

XToM: Exploring the Multilingual Theory of Mind for Large Language Models
Chunkit Chan, Yauwai Yim, Hongchuan Zeng, Zhiying Zou, Xinyuan Cheng, ..., Ginny Wong, Helmut Schmid, Hinrich Schütze, Simon See, Yangqiu Song
03 Jun 2025

Reading Between the Prompts: How Stereotypes Shape LLM's Implicit Personalization
Vera Neplenbroek, Arianna Bisazza, Raquel Fernández
22 May 2025

More Women, Same Stereotypes: Unpacking the Gender Bias Paradox in Large Language Models
Evan Chen, Run-Jun Zhan, Yan-Bai Lin, Hung-Hsuan Chen
20 Mar 2025

Language Models Predict Empathy Gaps Between Social In-groups and Out-groups (NAACL 2025)
Yu Hou, Hal Daumé III, Rachel Rudinger
02 Mar 2025

More of the Same: Persistent Representational Harms Under Increased Representation
Jennifer Mickel, Maria De-Arteaga, Leqi Liu, Kevin Tian
01 Mar 2025

Presumed Cultural Identity: How Names Shape LLM Responses
Siddhesh Pawar, Arnav Arora, Lucie-Aimée Kaffee, Isabelle Augenstein
17 Feb 2025

Refining Input Guardrails: Enhancing LLM-as-a-Judge Efficiency Through Chain-of-Thought Fine-Tuning and Alignment
Melissa Kazemi Rad, Huy Nghiem, Andy Luo, Sahil Wadhwa, Mohammad Sorower, Stephen Rawls
22 Jan 2025

Natural Language Processing for Human Resources: A Survey (NAACL 2024)
Naoki Otani, Nikita Bhutani, Estevam R. Hruschka
21 Oct 2024

Spoken Stereoset: On Evaluating Social Bias Toward Speaker in Speech Large Language Models (SLT 2024)
Yi-Cheng Lin, Wei-Chih Chen, Hung-yi Lee
14 Aug 2024

What's in a Name? Auditing Large Language Models for Race and Gender Bias
Amit Haim, Alejandro Salinas, Julian Nyarko
21 Feb 2024