Aligning with Human Judgement: The Role of Pairwise Preference in Large Language Model Evaluators
arXiv: 2403.16950
20 January 2025
Yinhong Liu, Han Zhou, Zhijiang Guo, Ehsan Shareghi, Ivan Vulić, Anna Korhonen, Nigel Collier
ALM
Papers citing "Aligning with Human Judgement: The Role of Pairwise Preference in Large Language Model Evaluators" (3 of 53 papers shown)
Can Large Language Models Be an Alternative to Human Evaluations?
Cheng-Han Chiang, Hung-yi Lee
ALM, LM&MA · 206 · 559 · 0 · 03 May 2023

Large Language Models are Zero-Shot Reasoners
Takeshi Kojima, S. Gu, Machel Reid, Yutaka Matsuo, Yusuke Iwasawa
ReLM, LRM · 291 · 2,712 · 0 · 24 May 2022

Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
OSLM, ALM · 301 · 11,730 · 0 · 04 Mar 2022