ResearchTrend.AI

"They are uncultured": Unveiling Covert Harms and Social Threats in LLM Generated Conversations

8 May 2024
Preetam Prabhu Srikar Dammu
Hayoung Jung
Anjali Singh
Monojit Choudhury
Tanushree Mitra

Papers citing ""They are uncultured": Unveiling Covert Harms and Social Threats in LLM Generated Conversations"

13 papers:
Personalisation or Prejudice? Addressing Geographic Bias in Hate Speech Detection using Debias Tuning in Large Language Models
Paloma Piot, Patricia Martín-Rodilla, Javier Parapar
04 May 2025

Characterizing LLM-driven Social Network: The Chirper.ai Case
Yiming Zhu, Yupeng He, Ehsan-ul Haq, Gareth Tyson, Pan Hui
14 Apr 2025

Social Bias Benchmark for Generation: A Comparison of Generation and QA-Based Evaluations
Jiho Jin, Woosung Kang, Junho Myung, Alice H. Oh
10 Mar 2025

Examining Human-AI Collaboration for Co-Writing Constructive Comments Online
Farhana Shahid, Maximilian Dittgen, Mor Naaman, Aditya Vashistha
05 Nov 2024

Algorithmic Behaviors Across Regions: A Geolocation Audit of YouTube Search for COVID-19 Misinformation Between the United States and South Africa
Hayoung Jung, Prerna Juneja, Tanushree Mitra
16 Sep 2024

ValueCompass: A Framework for Measuring Contextual Value Alignment Between Human and LLMs
Hua Shen, Tiffany Knearem, Reshmi Ghosh, Yu-Ju Yang, Tanushree Mitra, Yun Huang
15 Sep 2024

Towards Measuring and Modeling "Culture" in LLMs: A Survey
Muhammad Farid Adilazuarda, Sagnik Mukherjee, Pradhyumna Lavania, Siddhant Singh, Alham Fikri Aji, Jacki O'Neill, Ashutosh Modi, Monojit Choudhury
05 Mar 2024

Dialect prejudice predicts AI decisions about people's character, employability, and criminality
Valentin Hofmann, Pratyusha Kalluri, Dan Jurafsky, Sharese King
01 Mar 2024

Can Large Language Models Be an Alternative to Human Evaluations?
Cheng-Han Chiang, Hung-yi Lee
03 May 2023

Co-Writing Screenplays and Theatre Scripts with Language Models: An Evaluation by Industry Professionals
Piotr Wojciech Mirowski, Kory W. Mathewson, Jaylen Pittman, Richard Evans
29 Sep 2022

Challenges in Detoxifying Language Models
Johannes Welbl, Amelia Glaese, J. Uesato, Sumanth Dathathri, John F. J. Mellor, Lisa Anne Hendricks, Kirsty Anderson, Pushmeet Kohli, Ben Coppin, Po-Sen Huang
15 Sep 2021

The Perils of Using Mechanical Turk to Evaluate Open-Ended Text Generation
Marzena Karpinska, Nader Akoury, Mohit Iyyer
14 Sep 2021

A Survey on Bias and Fairness in Machine Learning
Ninareh Mehrabi, Fred Morstatter, N. Saxena, Kristina Lerman, Aram Galstyan
23 Aug 2019