ResearchTrend.AI

arXiv 2306.07899 · Cited By
Artificial Artificial Artificial Intelligence: Crowd Workers Widely Use Large Language Models for Text Production Tasks
13 June 2023
V. Veselovsky
Manoel Horta Ribeiro
Robert West

Papers citing "Artificial Artificial Artificial Intelligence: Crowd Workers Widely Use Large Language Models for Text Production Tasks"

50 / 88 papers shown
LLM Social Simulations Are a Promising Research Method
Jacy Reese Anthis
Ryan Liu
Sean M. Richardson
Austin C. Kozlowski
Bernard Koch
James A. Evans
Erik Brynjolfsson
Michael S. Bernstein
ALM
51
4
0
03 Apr 2025
The Risks of Using Large Language Models for Text Annotation in Social Science Research
Hao Lin
Yongjun Zhang
26
2
0
27 Mar 2025
An evaluation of LLMs and Google Translate for translation of selected Indian languages via sentiment and semantic analyses
Rohitash Chandra
Aryan Chaudhary
Yeshwanth Rayavarapu
44
0
0
27 Mar 2025
Improving User Behavior Prediction: Leveraging Annotator Metadata in Supervised Machine Learning Models
Lynnette Ng
Kokil Jaidka
Kaiyuan Tay
Hansin Ahuja
Niyati Chhaya
46
0
0
26 Mar 2025
R.U.Psycho? Robust Unified Psychometric Testing of Language Models
Julian Schelb
Orr Borin
David Garcia
Andreas Spitz
37
0
0
13 Mar 2025
Biases in Large Language Model-Elicited Text: A Case Study in Natural Language Inference
Grace Proebsting
Adam Poliak
50
0
0
06 Mar 2025
WildFrame: Comparing Framing in Humans and LLMs on Naturally Occurring Texts
Gili Lior
Liron Nacchace
Gabriel Stanovsky
56
0
0
24 Feb 2025
Economics of Sourcing Human Data
Sebastin Santy
Prasanta Bhattacharya
Manoel Horta Ribeiro
Kelsey Allen
Sewoong Oh
69
0
0
11 Feb 2025
LLMs as Workers in Human-Computational Algorithms? Replicating Crowdsourcing Pipelines with LLMs
Tongshuang Wu
Haiyi Zhu
Maya Albayrak
Alexis Axon
Amanda Bertsch
...
Ying-Jui Tseng
Patricia Vaidos
Zhijin Wu
Wei Yu Wu
Chenyang Yang
76
30
0
10 Jan 2025
Are LLMs Better than Reported? Detecting Label Errors and Mitigating Their Effect on Model Performance
Omer Nahum
Nitay Calderon
Orgad Keller
Idan Szpektor
Roi Reichart
23
1
0
24 Oct 2024
Does Data Contamination Detection Work (Well) for LLMs? A Survey and Evaluation on Detection Assumptions
Yujuan Fu
Özlem Uzuner
Meliha Yetisgen
Fei Xia
55
3
0
24 Oct 2024
Human-LLM Hybrid Text Answer Aggregation for Crowd Annotations
Jiyi Li
27
1
0
22 Oct 2024
Human-LLM Collaborative Construction of a Cantonese Emotion Lexicon
Yusong Zhang
Dong Dong
Chi-tim Hung
Leonard Heyerdahl
Tamara Giles-Vernick
Eng-kiong Yeoh
18
0
0
15 Oct 2024
CoCoLoFa: A Dataset of News Comments with Common Logical Fallacies Written by LLM-Assisted Crowds
Min-Hsuan Yeh
Ruyuan Wan
Ting-Hao 'Kenneth' Huang
15
1
0
04 Oct 2024
Can Unconfident LLM Annotations Be Used for Confident Conclusions?
Kristina Gligorić
Tijana Zrnic
Cinoo Lee
Emmanuel J. Candès
Dan Jurafsky
66
4
0
27 Aug 2024
Soda-Eval: Open-Domain Dialogue Evaluation in the age of LLMs
John Mendonça
Isabel Trancoso
A. Lavie
ALM
29
1
0
20 Aug 2024
Surveys Considered Harmful? Reflecting on the Use of Surveys in AI Research, Development, and Governance
Mohammad Tahaei
Daricia Wilkinson
Alisa Frik
Chi Lok Yu
Ruba Abu-Salma
Lauren Wilcox
27
3
0
26 Jul 2024
Quality Assured: Rethinking Annotation Strategies in Imaging AI
Tim Radsch
Annika Reinke
V. Weru
M. Tizabi
Nicholas Heller
Fabian Isensee
Annette Kopp-Schneider
Lena Maier-Hein
22
1
0
24 Jul 2024
M2QA: Multi-domain Multilingual Question Answering
Leon Arne Engländer
Hannah Sterz
Clifton A. Poth
Jonas Pfeiffer
Ilia Kuznetsov
Iryna Gurevych
VLM
33
1
0
01 Jul 2024
Knowledge Distillation in Automated Annotation: Supervised Text Classification with LLM-Generated Training Labels
Nicholas Pangakis
Samuel Wolken
27
15
0
25 Jun 2024
Assessing Good, Bad and Ugly Arguments Generated by ChatGPT: a New Dataset, its Methodology and Associated Tasks
Victor Hugo Nascimento Rocha
I. Silveira
Paulo Pirozelli
Denis Deratani Mauá
Fabio Gagliardi Cozman
18
0
0
21 Jun 2024
AI-Assisted Human Evaluation of Machine Translation
Vilém Zouhar
Tom Kocmi
Mrinmaya Sachan
30
4
0
18 Jun 2024
MemeGuard: An LLM and VLM-based Framework for Advancing Content Moderation via Meme Intervention
Prince Jha
Raghav Jain
Konika Mandal
Aman Chadha
Sriparna Saha
P. Bhattacharyya
19
6
0
08 Jun 2024
ACCORD: Closing the Commonsense Measurability Gap
François Roewer-Després
Jinyue Feng
Zining Zhu
Frank Rudzicz
LRM
34
0
0
04 Jun 2024
Trust and Terror: Hazards in Text Reveal Negatively Biased Credulity and Partisan Negativity Bias
Keith Burghardt
D. Fessler
Chyna Tang
Anne C. Pisor
Kristina Lerman
19
0
0
28 May 2024
"They are uncultured": Unveiling Covert Harms and Social Threats in LLM Generated Conversations
Preetam Prabhu Srikar Dammu
Hayoung Jung
Anjali Singh
Monojit Choudhury
Tanushree Mitra
21
8
0
08 May 2024
"I'm Not Sure, But...": Examining the Impact of Large Language Models' Uncertainty Expression on User Reliance and Trust
Sunnie S. Y. Kim
Q. V. Liao
Mihaela Vorvoreanu
Steph Ballard
Jennifer Wortman Vaughan
32
50
0
01 May 2024
Data Authenticity, Consent, & Provenance for AI are all broken: what will it take to fix them?
Shayne Longpre
Robert Mahari
Naana Obeng-Marnu
William Brannon
Tobin South
Katy Gero
Sandy Pentland
Jad Kabbara
51
5
0
19 Apr 2024
Edisum: Summarizing and Explaining Wikipedia Edits at Scale
Marija Sakota
Isaac Johnson
Guosheng Feng
Robert West
SyDa
KELM
25
2
0
04 Apr 2024
Mapping the Increasing Use of LLMs in Scientific Papers
Weixin Liang
Yaohui Zhang
Zhengxuan Wu
Haley Lepp
Wenlong Ji
...
Zhi Huang
Diyi Yang
Christopher Potts
Christopher D. Manning
James Y. Zou
AI4CE
DeLMO
30
57
0
01 Apr 2024
ChatGPT Rates Natural Language Explanation Quality Like Humans: But on Which Scales?
Fan Huang
Haewoon Kwak
Kunwoo Park
Jisun An
ALM
ELM
AI4MH
18
12
0
26 Mar 2024
Adaptive Ensembles of Fine-Tuned Transformers for LLM-Generated Text Detection
Zhixin Lai
Xuesheng Zhang
Suiyao Chen
DeLMO
33
30
0
20 Mar 2024
Fairness Feedback Loops: Training on Synthetic Data Amplifies Bias
Sierra Wyllie
Ilia Shumailov
Nicolas Papernot
27
25
0
12 Mar 2024
MAGID: An Automated Pipeline for Generating Synthetic Multi-modal Datasets
Hossein Aboutalebi
Hwanjun Song
Yusheng Xie
Arshit Gupta
Justin Sun
Hang Su
Igor Shalyminov
Nikolaos Pappas
Siffi Singh
Saab Mansour
DiffM
EGVM
46
4
0
05 Mar 2024
Counterspeakers' Perspectives: Unveiling Barriers and AI Needs in the Fight against Online Hate
Jimin Mun
Cathy Buerger
Jenny T Liang
Joshua Garland
Maarten Sap
19
10
0
29 Feb 2024
AmbigNLG: Addressing Task Ambiguity in Instruction for NLG
Ayana Niwa
Hayate Iso
20
4
0
27 Feb 2024
Faithful Temporal Question Answering over Heterogeneous Sources
Zhen Jia
Philipp Christmann
G. Weikum
25
9
0
23 Feb 2024
Watermarking Makes Language Models Radioactive
Tom Sander
Pierre Fernandez
Alain Durmus
Matthijs Douze
Teddy Furon
WaLM
29
11
0
22 Feb 2024
Beyond Probabilities: Unveiling the Misalignment in Evaluating Large Language Models
Chenyang Lyu
Minghao Wu
Alham Fikri Aji
ELM
27
13
0
21 Feb 2024
Shaping Human-AI Collaboration: Varied Scaffolding Levels in Co-writing with Language Models
Paramveer S. Dhillon
Somayeh Molaei
Jiaqi Li
Maximilian Golub
Shaochun Zheng
Lionel P. Robert
LLMAG
46
40
0
18 Feb 2024
MONAL: Model Autophagy Analysis for Modeling Human-AI Interactions
Shu Yang
Muhammad Asif Ali
Lu Yu
Lijie Hu
Di Wang
LLMAG
16
2
0
17 Feb 2024
AFaCTA: Assisting the Annotation of Factual Claim Detection with Reliable LLM Annotators
Jingwei Ni
Minjing Shi
Dominik Stammbach
Mrinmaya Sachan
Elliott Ash
Markus Leippold
HILM
10
11
0
16 Feb 2024
Network Formation and Dynamics Among Multi-LLMs
Marios Papachristou
Yuan Yuan
36
11
0
16 Feb 2024
A Tale of Tails: Model Collapse as a Change of Scaling Laws
Elvis Dohmatob
Yunzhen Feng
Pu Yang
Francois Charton
Julia Kempe
19
62
0
10 Feb 2024
Diverse, but Divisive: LLMs Can Exaggerate Gender Differences in Opinion Related to Harms of Misinformation
Terrence Neumann
Sooyong Lee
Maria De-Arteaga
S. Fazelpour
Matthew Lease
20
4
0
29 Jan 2024
A Comparative Study on Annotation Quality of Crowdsourcing and LLM via Label Aggregation
Jiyi Li
20
16
0
18 Jan 2024
AiGen-FoodReview: A Multimodal Dataset of Machine-Generated Restaurant Reviews and Images on Social Media
Alessandro Gambetti
Qiwei Han
DeLMO
18
3
0
16 Jan 2024
Evaluating Language Model Agency through Negotiations
Tim R. Davidson
V. Veselovsky
Martin Josifoski
Maxime Peyrard
Antoine Bosselut
Michal Kosinski
Robert West
LLMAG
29
22
0
09 Jan 2024
ALMANACS: A Simulatability Benchmark for Language Model Explainability
Edmund Mills
Shiye Su
Stuart J. Russell
Scott Emmons
43
7
0
20 Dec 2023
Large Language Models Suffer From Their Own Output: An Analysis of the Self-Consuming Training Loop
Martin Briesch
Dominik Sobania
Franz Rothlauf
27
54
0
28 Nov 2023