The Effectiveness of LLMs as Annotators: A Comparative Overview and Empirical Analysis of Direct Representation

2 May 2024
Maja Pavlovic, Massimo Poesio
arXiv:2405.01299

Papers citing "The Effectiveness of LLMs as Annotators: A Comparative Overview and Empirical Analysis of Direct Representation" (8 papers)

M-MAD: Multidimensional Multi-Agent Debate for Advanced Machine Translation Evaluation
Zhaopeng Feng, Jiayuan Su, Jiamei Zheng, Jiahan Ren, Yan Zhang, Jian Wu, Hongwei Wang, Zuozhu Liu
21 Feb 2025

Can LLMs Rank the Harmfulness of Smaller LLMs? We are Not There Yet
Berk Atil, Vipul Gupta, Sarkar Snigdha Sarathi Das, R. Passonneau
07 Feb 2025

MBIAS: Mitigating Bias in Large Language Models While Retaining Context
Shaina Raza, Ananya Raval, Veronica Chatrath
18 May 2024

SemEval-2023 Task 11: Learning With Disagreements (LeWiDi)
Elisa Leonardelli, Alexandra Uma, Gavin Abercrombie, Dina Almanea, Valerio Basile, Tommaso Fornaciari, Barbara Plank, Verena Rieser, Massimo Poesio
28 Apr 2023

Stop Measuring Calibration When Humans Disagree
Joris Baan, Wilker Aziz, Barbara Plank, Raquel Fernández
28 Oct 2022

Large Language Models are Zero-Shot Reasoners
Takeshi Kojima, S. Gu, Machel Reid, Yutaka Matsuo, Yusuke Iwasawa
24 May 2022

Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
04 Mar 2022

Agreeing to Disagree: Annotating Offensive Language Datasets with Annotators' Disagreement
Elisa Leonardelli, Stefano Menini, Alessio Palmero Aprosio, Marco Guerini, Sara Tonelli
28 Sep 2021