ResearchTrend.AI
arXiv:2007.03177
Modeling and mitigating human annotation errors to design efficient stream processing systems with human-in-the-loop machine learning
7 July 2020
Rahul Pandey, Hemant Purohit, Carlos Castillo, V. Shalin

Papers citing "Modeling and mitigating human annotation errors to design efficient stream processing systems with human-in-the-loop machine learning"

16 papers
From Ground Trust to Truth: Disparities in Offensive Language Judgments on Contemporary Korean Political Discourse
Seunguk Yu, Jungmin Yun, Jinhee Jang, Youngbin Kim
18 Sep 2025

XLQA: A Benchmark for Locale-Aware Multilingual Open-Domain Question Answering
Keon-Woo Roh, Yeong-Joon Ju, Seong-Whan Lee
22 Aug 2025

Trust and Reputation in Data Sharing: A Survey
Wenbo Wu, George Konstantinidis
19 Aug 2025

TransClean: Finding False Positives in Multi-Source Entity Matching under Real-World Conditions via Transitive Consistency
Fernando De Meer Pardo, Branka Hadji Misheva, Martin Braschler, Kurt Stockinger
04 Jun 2025

Prompting in the Dark: Assessing Human Performance in Prompt Engineering for Data Labeling When Gold Labels Are Absent
International Conference on Human Factors in Computing Systems (CHI), 2025
Zeyu He, Saniya Naphade, Ting-Hao 'Kenneth' Huang
16 Feb 2025

ORIS: Online Active Learning Using Reinforcement Learning-based Inclusive Sampling for Robust Streaming Analytics System
BigData Congress [Services Society] (BSS), 2024
Rahul Pandey, Ziwei Zhu, Hemant Purohit
27 Nov 2024

The Oscars of AI Theater: A Survey on Role-Playing with Language Models
Polydoros Giannouris, Yan Wang, Yang Deng, Jia Li
16 Jul 2024

Adversarial DPO: Harnessing Harmful Data for Reducing Toxicity with Minimal Impact on Coherence and Evasiveness in Dialogue Agents
San Kim, Gary Geunbae Lee
21 May 2024

Closing the Knowledge Gap in Designing Data Annotation Interfaces for AI-powered Disaster Management Analytic Systems
Zinat Ara, Hossein Salemi, Sungsoo Ray Hong, Yasas Senarath, Steve Peterson, A. Hughes, Hemant Purohit
04 Mar 2024

Dissecting Human and LLM Preferences
Junlong Li, Fan Zhou, Shichao Sun, Yikai Zhang, Hai Zhao, Pengfei Liu
17 Feb 2024

Linear Alignment: A Closed-form Solution for Aligning Human Preferences without Tuning and Feedback
International Conference on Machine Learning (ICML), 2024
Songyang Gao, Qiming Ge, Wei Shen, Jiajun Sun, Junjie Ye, ..., Yicheng Zou, Zhi Chen, Hang Yan, Tao Gui, Dahua Lin
21 Jan 2024

Conditions on Preference Relations that Guarantee the Existence of Optimal Policies
International Conference on Artificial Intelligence and Statistics (AISTATS), 2023
Jonathan Colaco Carr, Prakash Panangaden, Doina Precup
03 Nov 2023

Towards Understanding Sycophancy in Language Models
Mrinank Sharma, Meg Tong, Tomasz Korbak, David Duvenaud, Amanda Askell, ..., Oliver Rausch, Nicholas Schiefer, Da Yan, Miranda Zhang, Ethan Perez
20 Oct 2023

Compositional preference models for aligning LMs
International Conference on Learning Representations (ICLR), 2023
Dongyoung Go, Tomasz Korbak, Germán Kruszewski, Jos Rozen, Marc Dymetman
17 Oct 2023

Towards Reliable Dermatology Evaluation Benchmarks
Fabian Gröger, Simone Lionetti, Philippe Gottfrois, Alvaro Gonzalez-Jimenez, Matthew Groh, Roxana Daneshjou, Labelling Consortium, Alexander A. Navarini, Marc Pouly
13 Sep 2023

Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Stephen Casper, Xander Davies, Claudia Shi, T. Gilbert, Jérémy Scheurer, ..., Erdem Biyik, Anca Dragan, David M. Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
27 Jul 2023