ResearchTrend.AI

arXiv:2311.14096 — Cited By
Cultural Bias and Cultural Alignment of Large Language Models

23 November 2023
Yan Tao, Olga Viberg, Ryan S. Baker, René F. Kizilcec
ELM

Papers citing "Cultural Bias and Cultural Alignment of Large Language Models"

13 papers shown
Navigating the Rabbit Hole: Emergent Biases in LLM-Generated Attack Narratives Targeting Mental Health Groups
Rijul Magu, Arka Dutta, Sean Kim, Ashiqur R. KhudaBukhsh, Munmun De Choudhury
08 Apr 2025
Designing Speech Technologies for Australian Aboriginal English: Opportunities, Risks and Participation
Ben Hutchinson, Celeste Rodríguez Louro, Glenys Collard, Ned Cooper
05 Mar 2025
Surveying Attitudinal Alignment Between Large Language Models Vs. Humans Towards 17 Sustainable Development Goals
Qingyang Wu, Ying Xu, Tingsong Xiao, Yunze Xiao, Yitong Li, ..., Yichi Zhang, Shanghai Zhong, Yuwei Zhang, Wei Lu, Yifan Yang
17 Jan 2025
ValuesRAG: Enhancing Cultural Alignment Through Retrieval-Augmented Contextual Learning
Wonduk Seo, Zonghao Yuan, Yi Bu
VLM
02 Jan 2025
One Language, Many Gaps: Evaluating Dialect Fairness and Robustness of Large Language Models in Reasoning Tasks
Fangru Lin, Shaoguang Mao, Emanuele La Malfa, Valentin Hofmann, Adrian de Wynter, Jing Yao, Si-Qing Chen, Michael Wooldridge, Furu Wei
14 Oct 2024
Controllable Safety Alignment: Inference-Time Adaptation to Diverse Safety Requirements
Jingyu Zhang, Ahmed Elgohary, Ahmed Magooda, Daniel Khashabi, Benjamin Van Durme
11 Oct 2024
Exposing Assumptions in AI Benchmarks through Cognitive Modelling
Jonathan H. Rystrøm, Kenneth C. Enevoldsen
25 Sep 2024
AI Suggestions Homogenize Writing Toward Western Styles and Diminish Cultural Nuances
Dhruv Agarwal, Mor Naaman, Aditya Vashistha
17 Sep 2024
ExploreSelf: Fostering User-driven Exploration and Reflection on Personal Challenges with Adaptive Guidance by Large Language Models
Inhwa Song, SoHyun Park, Sachin R. Pendse, J. Schleider, Munmun De Choudhury, Young-Ho Kim
15 Sep 2024
Inverse Constitutional AI: Compressing Preferences into Principles
Arduin Findeis, Timo Kaufmann, Eyke Hüllermeier, Samuel Albanie, Robert Mullins
SyDa
02 Jun 2024
CRAFT: Extracting and Tuning Cultural Instructions from the Wild
Bin Wang, Geyu Lin, Zhengyuan Liu, Chengwei Wei, Nancy F. Chen
06 May 2024
OMGEval: An Open Multilingual Generative Evaluation Benchmark for Large Language Models
Yang Janet Liu, Meng Xu, Shuo Wang, Liner Yang, Haoyu Wang, ..., Cunliang Kong, Yun-Nung Chen, Yang Liu, Maosong Sun, Erhong Yang
ELM, LRM
21 Feb 2024
Black-Box Access is Insufficient for Rigorous AI Audits
Stephen Casper, Carson Ezell, Charlotte Siegmann, Noam Kolt, Taylor Lynn Curtis, ..., Michael Gerovitch, David Bau, Max Tegmark, David M. Krueger, Dylan Hadfield-Menell
AAML
25 Jan 2024