ResearchTrend.AI

Semantics derived automatically from language corpora contain human-like biases
arXiv:1608.07187 · 25 August 2016
Aylin Caliskan, J. Bryson, Arvind Narayanan

Papers citing "Semantics derived automatically from language corpora contain human-like biases"

50 / 512 papers shown
How to Train your Text-to-Image Model: Evaluating Design Choices for Synthetic Training Captions
Manuel Brack, Sudeep Katakol, Felix Friedrich, P. Schramowski, Hareesh Ravi, Kristian Kersting, Ajinkya Kale
20 Jun 2025

Exploring Cultural Variations in Moral Judgments with Large Language Models
Hadi Mohammadi, Efthymia Papadopoulou, Yasmeen F.S.S. Meijer, Ayoub Bagheri
14 Jun 2025

Addressing Bias in LLMs: Strategies and Application to Fair AI-based Recruitment
Alejandro Peña, Julian Fierrez, Aythami Morales, Gonzalo Mancera, Miguel Lopez, Ruben Tolosana
13 Jun 2025

Robustly Improving LLM Fairness in Realistic Settings via Interpretability
Adam Karvonen, Samuel Marks
12 Jun 2025

Surface Fairness, Deep Bias: A Comparative Study of Bias in Language Models
Aleksandra Sorokovikova, Pavel Chizhov, Iuliia Eremenko, Ivan P. Yamshchikov
12 Jun 2025

Biases Propagate in Encoder-based Vision-Language Models: A Systematic Analysis From Intrinsic Measures to Zero-shot Retrieval Outcomes
Kshitish Ghate, Tessa E. S. Charlesworth, Mona Diab, Aylin Caliskan
06 Jun 2025 · VLM
Words of Warmth: Trust and Sociability Norms for over 26k English Words
Saif M. Mohammad
04 Jun 2025

Understanding and Meeting Practitioner Needs When Measuring Representational Harms Caused by LLM-Based Systems
Emma Harvey, Emily Sheng, Su Lin Blodgett, Alexandra Chouldechova, Jean Garcia-Gathright, Alexandra Olteanu, Hanna M. Wallach
04 Jun 2025

QA-HFL: Quality-Aware Hierarchical Federated Learning for Resource-Constrained Mobile Devices with Heterogeneous Image Quality
Sajid Hussain, Muhammad Sohail, Nauman Ali Khan
04 Jun 2025

Translate With Care: Addressing Gender Bias, Neutrality, and Reasoning in Large Language Model Translations
Pardis Sadat Zahraei, Ali Emami
31 May 2025

Think Again! The Effect of Test-Time Compute on Preferences, Opinions, and Beliefs of Large Language Models
George Kour, Itay Nakash, Ateret Anaby-Tavor, Michal Shmueli-Scheuer
26 May 2025

Surfacing Semantic Orthogonality Across Model Safety Benchmarks: A Multi-Dimensional Analysis
Jonathan Bennion, Shaona Ghosh, Mantek Singh, Nouha Dziri
23 May 2025
DECASTE: Unveiling Caste Stereotypes in Large Language Models through Multi-Dimensional Bias Analysis
Prashanth Vijayaraghavan, Soroush Vosoughi, Lamogha Chizor, Raya Horesh, Rogerio Abreu de Paula, Ehsan Degan, Vandana Mukherjee
20 May 2025

Wisdom from Diversity: Bias Mitigation Through Hybrid Human-LLM Crowds
Axel Abels, Tom Lenaerts
18 May 2025

Decoding the Mind of Large Language Models: A Quantitative Evaluation of Ideology and Biases
Manari Hirose, Masato Uchida
18 May 2025

Gender and Positional Biases in LLM-Based Hiring Decisions: Evidence from Comparative CV/Résumé Evaluations
David Rozado
16 May 2025

Developing A Framework to Support Human Evaluation of Bias in Generated Free Response Text
Jennifer Healey, Laurie Byrum, Md Nadeem Akhtar, Surabhi Bhargava, Moumita Sinha
05 May 2025

Whence Is A Model Fair? Fixing Fairness Bugs via Propensity Score Matching
Kewen Peng, Yicheng Yang, Hao Zhuo
23 Apr 2025

Evaluating how LLM annotations represent diverse views on contentious topics
Megan A. Brown, Shubham Atreja, Libby Hemphill, Patrick Y. Wu
29 Mar 2025

An evaluation of LLMs and Google Translate for translation of selected Indian languages via sentiment and semantic analyses
Rohitash Chandra, Aryan Chaudhary, Yeshwanth Rayavarapu
27 Mar 2025
Attention IoU: Examining Biases in CelebA using Attention Maps
Aaron Serianni, Tyler Zhu, Olga Russakovsky, V. V. Ramaswamy
25 Mar 2025

Implicit Bias-Like Patterns in Reasoning Models
Messi H.J. Lee, Calvin K. Lai
14 Mar 2025 · LRM

Implicit Bias in LLMs: A Survey
Xinru Lin, Luyang Li
04 Mar 2025

Rethinking LLM Bias Probing Using Lessons from the Social Sciences
Kirsten N. Morehouse, S. Swaroop, Weiwei Pan
28 Feb 2025

Encoding Inequity: Examining Demographic Bias in LLM-Driven Robot Caregiving
Raj Korpan
24 Feb 2025

Benchmarking the rationality of AI decision making using the transitivity axiom
Kiwon Song, James M. Jennings III, Clintin P. Davis-Stober
14 Feb 2025

Intrinsic Bias is Predicted by Pretraining Data and Correlates with Downstream Performance in Vision-Language Encoders
Kshitish Ghate, Isaac Slaughter, Kyra Wilson, Mona Diab, Aylin Caliskan
11 Feb 2025

Addressing Bias in Generative AI: Challenges and Research Opportunities in Information Management
Xiahua Wei, Naveen Kumar, Han Zhang
22 Jan 2025
Enhancing Patient-Centric Communication: Leveraging LLMs to Simulate Patient Perspectives
Xinyao Ma, Rui Zhu, Zihao Wang, Jingwei Xiong, Qingyu Chen, Haixu Tang, L. Jean Camp, Lucila Ohno-Machado
12 Jan 2025 · LM&MA

Explicit vs. Implicit: Investigating Social Bias in Large Language Models through Self-Reflection
Yachao Zhao, Bo Wang, Yan Wang, Dongming Zhao, Ruifang He, Yuexian Hou
04 Jan 2025

ValuesRAG: Enhancing Cultural Alignment Through Retrieval-Augmented Contextual Learning
Wonduk Seo, Hyunjin An, Yi Bu
02 Jan 2025 · VLM

Towards Open-Vocabulary Remote Sensing Image Semantic Segmentation
Chengyang Ye, Yunzhi Zhuge, Pingping Zhang
27 Dec 2024 · VLM

Perception of Visual Content: Differences Between Humans and Foundation Models
Nardiena A. Pratama, Shaoyang Fan, Gianluca Demartini
28 Nov 2024 · VLM

Profiling Bias in LLMs: Stereotype Dimensions in Contextual Word Embeddings
Carolin M. Schuster, Maria-Alexandra Dinisor, Shashwat Ghatiwala, Georg Groh
25 Nov 2024
FairMT-Bench: Benchmarking Fairness for Multi-turn Dialogue in Conversational LLMs
Zhiting Fan, Ruizhe Chen, Tianxiang Hu, Zuozhu Liu
25 Oct 2024

Enabling Scalable Evaluation of Bias Patterns in Medical LLMs
Hamed Fayyaz, Raphael Poulain, Rahmatollah Beheshti
18 Oct 2024

LLMs are Biased Teachers: Evaluating LLM Bias in Personalized Education
Iain Xie Weissburg, Sathvika Anand, Sharon Levy, Haewon Jeong
17 Oct 2024

Aggregation Artifacts in Subjective Tasks Collapse Large Language Models' Posteriors
Georgios Chochlakis, Alexandros Potamianos, Kristina Lerman, Shrikanth Narayanan
17 Oct 2024

Stereotype or Personalization? User Identity Biases Chatbot Recommendations
Anjali Kantharuban, Jeremiah Milbauer, Emma Strubell, Graham Neubig
08 Oct 2024

Collapsed Language Models Promote Fairness
Jingxuan Xu, Wuyang Chen, Linyi Li, Yao Zhao, Yunchao Wei
06 Oct 2024

Mitigating Propensity Bias of Large Language Models for Recommender Systems
Guixian Zhang, Guan Yuan, Debo Cheng, Lin Liu, Jiuyong Li, Shichao Zhang
30 Sep 2024

Identity-related Speech Suppression in Generative AI Content Moderation
Oghenefejiro Isaacs Anigboro, Charlie M. Crawford, Danaë Metaxa, Sorelle A. Friedler
09 Sep 2024
Does Liking Yellow Imply Driving a School Bus? Semantic Leakage in Language Models
Hila Gonen, Terra Blevins, Alisa Liu, Luke Zettlemoyer, Noah A. Smith
12 Aug 2024

Vectoring Languages
Joseph Chen
16 Jul 2024

Bringing AI Participation Down to Scale: A Comment on OpenAI's Democratic Inputs to AI Project
David Moats, Chandrima Ganguly
16 Jul 2024 · VLM

CEB: Compositional Evaluation Benchmark for Fairness in Large Language Models
Song Wang, Peng Wang, Tong Zhou, Yushun Dong, Zhen Tan, Jundong Li
02 Jul 2024 · CoGe

GenderBias-VL: Benchmarking Gender Bias in Vision Language Models via Counterfactual Probing
Yisong Xiao, Aishan Liu, QianJia Cheng, Zhenfei Yin, Siyuan Liang, Jiapeng Li, Jing Shao, Xianglong Liu, Dacheng Tao
30 Jun 2024

Large Language Models are Biased Because They Are Large Language Models
Philip Resnik
19 Jun 2024

Culturally Aware and Adapted NLP: A Taxonomy and a Survey of the State of the Art
Chen Cecilia Liu, Iryna Gurevych, Anna Korhonen
06 Jun 2024

Exploring Subjectivity for more Human-Centric Assessment of Social Biases in Large Language Models
Paula Akemi Aoyagui, Sharon Ferguson, Anastasia Kuzminykh
17 May 2024