Auditing the Use of Language Models to Guide Hiring Decisions

3 April 2024
Johann D. Gaebler, Sharad Goel, Aziz Huq, Prasanna Tambe
MLAU

Papers citing "Auditing the Use of Language Models to Guide Hiring Decisions"

7 / 7 papers shown
Evaluating Bias in LLMs for Job-Resume Matching: Gender, Race, and Education
Hayate Iso, Pouya Pezeshkpour, Nikita Bhutani, Estevam R. Hruschka
24 Mar 2025

Hiring under Congestion and Algorithmic Monoculture: Value of Strategic Behavior
Jackie Baek, Hamsa Bastani, Shihan Chen
27 Feb 2025

Towards Effective Discrimination Testing for Generative AI
Thomas P. Zollo, Nikita Rajaneesh, Richard Zemel, Talia B. Gillis, Emily Black
31 Dec 2024

Unboxing Occupational Bias: Grounded Debiasing of LLMs with U.S. Labor Data
Atmika Gorti, Manas Gaur, Aman Chadha
20 Aug 2024

JobFair: A Framework for Benchmarking Gender Hiring Bias in Large Language Models
Ze Wang, Zekun Wu, Xin Guan, Michael Thaler, Adriano Soares Koshiyama, Skylar Lu, Sachin Beepath, Ediz Ertekin Jr., Maria Perez-Ortiz
17 Jun 2024

Do Large Language Models Discriminate in Hiring Decisions on the Basis of Race, Ethnicity, and Gender?
Haozhe An, Christabel Acquaye, Colin Wang, Zongxia Li, Rachel Rudinger
15 Jun 2024

Fair prediction with disparate impact: A study of bias in recidivism prediction instruments
Alexandra Chouldechova
FaML
24 Oct 2016