When the Majority is Wrong: Modeling Annotator Disagreement for Subjective Tasks (arXiv:2305.06626)

11 May 2023
Eve Fleisig
Rediet Abebe
Dan Klein

Papers citing "When the Majority is Wrong: Modeling Annotator Disagreement for Subjective Tasks"

35 papers

Capturing Individual Human Preferences with Reward Features
André Barreto, Vincent Dumoulin, Yiran Mao, Nicolas Perez-Nieves, Bobak Shahriari, Yann Dauphin, Doina Precup, Hugo Larochelle
21 Mar 2025

Urban Safety Perception Through the Lens of Large Multimodal Models: A Persona-based Approach
Ciro Beneduce, Bruno Lepri, Massimiliano Luca
01 Mar 2025

Beyond Demographics: Fine-tuning Large Language Models to Predict Individuals' Subjective Text Perceptions
Matthias Orlikowski, Jiaxin Pei, Paul Röttger, Philipp Cimiano, David Jurgens, Dirk Hovy
28 Feb 2025

Game Theory Meets Large Language Models: A Systematic Survey
Haoran Sun, Yusen Wu, Yukun Cheng, Xu Chu
13 Feb 2025

Correcting Annotator Bias in Training Data: Population-Aligned Instance Replication (PAIR)
Stephanie Eckman, Bolei Ma, Christoph Kern, Rob Chew, Barbara Plank, Frauke Kreuter
12 Jan 2025

Beyond Dataset Creation: Critical View of Annotation Variation and Bias Probing of a Dataset for Online Radical Content Detection
Arij Riabi, Virginie Mouilleron, Menel Mahamdi, Wissam Antoun, Djamé Seddah
16 Dec 2024

Venire: A Machine Learning-Guided Panel Review System for Community Content Moderation
Vinay Koshy, Frederick Choi, Yi-Shyuan Chiang, Hari Sundaram, Eshwar Chandrasekharan, Karrie Karahalios
30 Oct 2024

Reducing Annotator Bias by Belief Elicitation
Terne Sasha Thorn Jakobsen, Andreas Bjerre-Nielsen, Robert Böhm
21 Oct 2024

Ethics Whitepaper: Whitepaper on Ethical Research into Large Language Models
Eddie L. Ungless, Nikolas Vitsakis, Zeerak Talat, James Garforth, Bjorn Ross, Arno Onken, Atoosa Kasirzadeh, Alexandra Birch
17 Oct 2024

Modeling Future Conversation Turns to Teach LLMs to Ask Clarifying Questions
Michael J.Q. Zhang, W. Bradley Knox, Eunsol Choi
17 Oct 2024

Accurate and Data-Efficient Toxicity Prediction when Annotators Disagree
Harbani Jaggi, Kashyap Murali, Eve Fleisig, Erdem Bıyık
16 Oct 2024

Intuitions of Compromise: Utilitarianism vs. Contractualism
Jared Moore, Yejin Choi, Sydney Levine
07 Oct 2024

Re-examining Sexism and Misogyny Classification with Annotator Attitudes
Aiqi Jiang, Nikolas Vitsakis, Tanvi Dinkar, Gavin Abercrombie, Ioannis Konstas
04 Oct 2024

ConsistencyTrack: A Robust Multi-Object Tracker with a Generation Strategy of Consistency Model
Lifan Jiang, Zhihui Wang, Siqi Yin, Guangxiao Ma, Peng Zhang, Boxi Wu
28 Aug 2024

Can Unconfident LLM Annotations Be Used for Confident Conclusions?
Kristina Gligorić, Tijana Zrnic, Cinoo Lee, Emmanuel J. Candès, Dan Jurafsky
27 Aug 2024

Personalizing Reinforcement Learning from Human Feedback with Variational Preference Learning
S. Poddar, Yanming Wan, Hamish Ivison, Abhishek Gupta, Natasha Jaques
19 Aug 2024

Improving Context-Aware Preference Modeling for Language Models
Silviu Pitis, Ziang Xiao, Nicolas Le Roux, Alessandro Sordoni
20 Jul 2024

Voices in a Crowd: Searching for Clusters of Unique Perspectives
Nikolas Vitsakis, Amit Parekh, Ioannis Konstas
19 Jul 2024

Let Guidelines Guide You: A Prescriptive Guideline-Centered Data Annotation Methodology
Federico Ruggeri, Eleonora Misino, Arianna Muti, Katerina Korre, Paolo Torroni, Alberto Barrón-Cedeño
20 Jun 2024

Whose Preferences? Differences in Fairness Preferences and Their Impact on the Fairness of AI Utilizing Human Feedback
Emilia Agis Lerner, Florian E. Dorner, Elliott Ash, Naman Goel
09 Jun 2024

The Perspectivist Paradigm Shift: Assumptions and Challenges of Capturing Human Labels
Eve Fleisig, Su Lin Blodgett, Dan Klein, Zeerak Talat
09 May 2024

Beyond Accuracy: Investigating Error Types in GPT-4 Responses to USMLE Questions
Soumyadeep Roy, A. Khatua, Fatemeh Ghoochani, Uwe Hadler, Wolfgang Nejdl, Niloy Ganguly
20 Apr 2024

Cross-cultural Inspiration Detection and Analysis in Real and LLM-generated Social Media Data
Oana Ignat, Gayathri Ganesh Lakshmy, Rada Mihalcea
19 Apr 2024

AdvisorQA: Towards Helpful and Harmless Advice-seeking Question Answering with Collective Intelligence
Minbeom Kim, Hwanhee Lee, Joonsuk Park, Hwaran Lee, Kyomin Jung
18 Apr 2024

Corpus Considerations for Annotator Modeling and Scaling
O. O. Sarumi, Béla Neuendorf, Joan Plepi, Lucie Flek, Jorg Schlotterer, Charles F Welch
02 Apr 2024

Fairness in Large Language Models: A Taxonomic Survey
Zhibo Chu, Zichong Wang, Wenbin Zhang
31 Mar 2024

Subjective Isms? On the Danger of Conflating Hate and Offence in Abusive Language Detection
A. C. Curry, Gavin Abercrombie, Zeerak Talat
04 Mar 2024

Quantifying the Persona Effect in LLM Simulations
Tiancheng Hu, Nigel Collier
16 Feb 2024

A Roadmap to Pluralistic Alignment
Taylor Sorensen, Jared Moore, Jillian R. Fisher, Mitchell L. Gordon, Niloofar Mireshghallah, ..., Liwei Jiang, Ximing Lu, Nouha Dziri, Tim Althoff, Yejin Choi
07 Feb 2024

Personalized Language Modeling from Personalized Human Feedback
Xinyu Li, Zachary C. Lipton, Liu Leqi
06 Feb 2024

Distributional Preference Learning: Understanding and Accounting for Hidden Context in RLHF
Anand Siththaranjan, Cassidy Laidlaw, Dylan Hadfield-Menell
13 Dec 2023

Disentangling Perceptions of Offensiveness: Cultural and Moral Correlates
Aida Mostafazadeh Davani, Mark Díaz, Dylan K. Baker, Vinodkumar Prabhakaran
11 Dec 2023

First Tragedy, then Parse: History Repeats Itself in the New Era of Large Language Models
Naomi Saphra, Eve Fleisig, Kyunghyun Cho, Adam Lopez
08 Nov 2023

A Taxonomy of Rater Disagreements: Surveying Challenges & Opportunities from the Perspective of Annotating Online Toxicity
Wenbo Zhang, Hangzhi Guo, Ian D Kivlichan, Vinodkumar Prabhakaran, Davis Yadav, Amulya Yadav
07 Nov 2023

Sensitivity, Performance, Robustness: Deconstructing the Effect of Sociodemographic Prompting
Tilman Beck, Hendrik Schuff, Anne Lauscher, Iryna Gurevych
13 Sep 2023