Cited By
Annotation alignment: Comparing LLM and human annotations of conversational safety
Rajiv Movva, Pang Wei Koh, Emma Pierson
arXiv:2406.06369 · 10 June 2024 · ALM

Papers citing "Annotation alignment: Comparing LLM and human annotations of conversational safety" (5 of 5 papers shown)

A Short Survey on Small Reasoning Models: Training, Inference, Applications and Research Directions
Chengyu Wang, Taolin Zhang, Richang Hong, Jun Huang · ReLM, LRM · 12 Apr 2025

Better Aligned with Survey Respondents or Training Data? Unveiling Political Leanings of LLMs on U.S. Supreme Court Cases
Shanshan Xu, T. Y. S. S. Santosh, Yanai Elazar, Quirin Vogel, Barbara Plank, Matthias Grabmair · AILaw · 25 Feb 2025

Sociodemographic Prompting is Not Yet an Effective Approach for Simulating Subjective Judgments with LLMs
Huaman Sun, Jiaxin Pei, Minje Choi, David Jurgens · 16 Nov 2023

Can Large Language Models Be an Alternative to Human Evaluations?
Cheng-Han Chiang, Hung-yi Lee · ALM, LM&MA · 03 May 2023

With Little Power Comes Great Responsibility
Dallas Card, Peter Henderson, Urvashi Khandelwal, Robin Jia, Kyle Mahowald, Dan Jurafsky · 13 Oct 2020