The Promises and Pitfalls of LLM Annotations in Dataset Labeling: a Case Study on Media Bias Detection

North American Chapter of the Association for Computational Linguistics (NAACL), 2024
17 November 2024
Tomas Horych
Christoph Mandl
Terry Ruas
André Greiner-Petter
Bela Gipp
Akiko Aizawa
Timo Spinde
arXiv: 2411.11081

Papers citing "The Promises and Pitfalls of LLM Annotations in Dataset Labeling: a Case Study on Media Bias Detection"

29 citing papers
Hateful Person or Hateful Model? Investigating the Role of Personas in Hate Speech Detection by Large Language Models
Shuzhou Yuan
Ercong Nie
Mario Tawfelis
Helmut Schmid
Hinrich Schütze
Michael Färber
10 Jun 2025
Leveraging Large Language Models for Automated Definition Extraction with TaxoMatic: A Case Study on Media Bias
International Conference on Web and Social Media (ICWSM), 2025
Timo Spinde
Luyang Lin
Smi Hinterreiter
Isao Echizen
01 Apr 2025
Through the LLM Looking Glass: A Socratic Probing of Donkeys, Elephants, and Markets
Molly Kennedy
Ayyoob Imani
Timo Spinde
Hinrich Schütze
20 Mar 2025
Prompting in the Dark: Assessing Human Performance in Prompt Engineering for Data Labeling When Gold Labels Are Absent
International Conference on Human Factors in Computing Systems (CHI), 2025
Zeyu He
Saniya Naphade
Ting-Hao 'Kenneth' Huang
16 Feb 2025
Sentence-level Media Bias Analysis with Event Relation Graph
North American Chapter of the Association for Computational Linguistics (NAACL), 2024
Yuanyuan Lei
Ruihong Huang
02 Apr 2024
MAGPIE: Multi-Task Media-Bias Analysis Generalization for Pre-Trained Identification of Expressions
Tomáš Horych
Martin Wessel
Jan Philip Wahle
Terry Ruas
Jerome Wassmuth
André Greiner-Petter
Akiko Aizawa
Bela Gipp
Timo Spinde
27 Feb 2024
IndiVec: An Exploration of Leveraging Large Language Models for Media Bias Detection with Fine-Grained Bias Indicators
Luyang Lin
Lingzhi Wang
Xiaoyan Zhao
Jing Li
Kam-Fai Wong
01 Feb 2024
The Media Bias Taxonomy: A Systematic Literature Review on the Forms and Automated Detection of Media Bias
Timo Spinde
Smilla Hinterreiter
Fabian Haak
Terry Ruas
Helge Giese
Norman Meuschke
Bela Gipp
26 Dec 2023
Zephyr: Direct Distillation of LM Alignment
Lewis Tunstall
E. Beeching
Nathan Lambert
Nazneen Rajani
Kashif Rasul
...
Nathan Habib
Nathan Sarrazin
Omar Sanseviero
Alexander M. Rush
Thomas Wolf
25 Oct 2023
OpenChat: Advancing Open-source Language Models with Mixed-Quality Data
International Conference on Learning Representations (ICLR), 2023
Guan-Bo Wang
Sijie Cheng
Xianyuan Zhan
Xiangang Li
Sen Song
Yang Liu
20 Sep 2023
Active Learning Principles for In-Context Learning with Large Language Models
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023
Katerina Margatina
Timo Schick
Nikolaos Aletras
Jane Dwivedi-Yu
23 May 2023
Missing Information, Unresponsive Authors, Experimental Flaws: The Impossibility of Assessing the Reproducibility of Previous Human Evaluations in NLP
First Workshop on Insights from Negative Results in NLP (Insights), 2023
Anya Belz
Craig Thomson
Ehud Reiter
Gavin Abercrombie
J. Alonso-Moral
...
Antonio Toral
Xiao-Yi Wan
Leo Wanner
Lewis J. Watson
Diyi Yang
02 May 2023
Introducing MBIB -- the first Media Bias Identification Benchmark Task and Dataset Collection
Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR), 2023
Martin Wessel
Tomáš Horych
Terry Ruas
Akiko Aizawa
Bela Gipp
Timo Spinde
25 Apr 2023
AnnoLLM: Making Large Language Models to Be Better Crowdsourced Annotators
North American Chapter of the Association for Computational Linguistics (NAACL), 2023
Xingwei He
Zheng-Wen Lin
Yeyun Gong
Alex Jin
Hang Zhang
Chen Lin
Jian Jiao
Siu-Ming Yiu
Nan Duan
Weizhu Chen
29 Mar 2023
ChatGPT Outperforms Crowd-Workers for Text-Annotation Tasks
Proceedings of the National Academy of Sciences of the United States of America (PNAS), 2023
Fabrizio Gilardi
Meysam Alizadeh
M. Kubli
27 Mar 2023
Is ChatGPT better than Human Annotators? Potential and Limitations of ChatGPT in Explaining Implicit Hate Speech
The Web Conference (WWW), 2023
Fan Huang
Haewoon Kwak
Jisun An
11 Feb 2023
Exploiting Transformer-based Multitask Learning for the Detection of Media Bias in News Articles
iConference, 2022
Timo Spinde
Jan-David Krieger
Terry Ruas
Jelena Mitrović
Franz Götz-Hahn
Akiko Aizawa
Bela Gipp
07 Nov 2022
Scaling Instruction-Finetuned Language Models
Journal of Machine Learning Research (JMLR), 2022
Hyung Won Chung
Le Hou
Shayne Longpre
Barret Zoph
Yi Tay
...
Jacob Devlin
Adam Roberts
Denny Zhou
Quoc V. Le
Jason W. Wei
20 Oct 2022
Neural Media Bias Detection Using Distant Supervision With BABE -- Bias Annotations By Experts
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Timo Spinde
Manuel Plank
Jan-David Krieger
Terry Ruas
Bela Gipp
Akiko Aizawa
29 Sep 2022
A Domain-adaptive Pre-training Approach for Language Bias Detection in News
ACM/IEEE Joint Conference on Digital Libraries (JCDL), 2022
Jan-David Krieger
Timo Spinde
Terry Ruas
Juhi Kulshrestha
Bela Gipp
22 May 2022
Towards A Reliable Ground-Truth For Biased Language Detection
Timo Spinde
David Krieger
Manuel Plank
Bela Gipp
14 Dec 2021
Do You Think It's Biased? How To Ask For The Perception Of Media Bias
Timo Spinde
Christina Kreuter
W. Gaissmaier
Felix Hamborg
Bela Gipp
H. Giese
14 Dec 2021
Do Datasets Have Politics? Disciplinary Values in Computer Vision Dataset Development
M. Scheuerman
Emily L. Denton
A. Hanna
09 Aug 2021
Towards Understanding and Mitigating Social Biases in Language Models
Paul Pu Liang
Chiyu Wu
Louis-Philippe Morency
Ruslan Salakhutdinov
24 Jun 2021
MBIC -- A Media Bias Annotation Dataset Including Annotator Characteristics
Timo Spinde
L. Rudnitckaia
K. Sinha
Felix Hamborg
Bela Gipp
K. Donnay
20 May 2021
What Makes Good In-Context Examples for GPT-3?
Workshop on Knowledge Extraction and Integration for Deep Learning Architectures; Deep Learning Inside Out (DEELIO), 2021
Jiachang Liu
Dinghan Shen
Yizhe Zhang
Bill Dolan
Lawrence Carin
Weizhu Chen
17 Jan 2021
Beyond Accuracy: Behavioral Testing of NLP models with CheckList
Marco Tulio Ribeiro
Tongshuang Wu
Carlos Guestrin
Sameer Singh
08 May 2020
Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
Journal of Machine Learning Research (JMLR), 2019
Colin Raffel
Noam M. Shazeer
Adam Roberts
Katherine Lee
Sharan Narang
Michael Matena
Yanqi Zhou
Wei Li
Peter J. Liu
23 Oct 2019
In Plain Sight: Media Bias Through the Lens of Factual Reporting
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2019
Lisa Fan
M. White
Eva Sharma
Ruisi Su
Prafulla Kumar Choubey
Ruihong Huang
Lu Wang
05 Sep 2019