ResearchTrend.AI

Prometheus: Inducing Fine-grained Evaluation Capability in Language Models (arXiv:2310.08491)
12 October 2023
Seungone Kim, Jamin Shin, Yejin Cho, Joel Jang, Shayne Longpre, Hwaran Lee, Sangdoo Yun, Seongjin Shin, Sungdong Kim, James Thorne, Minjoon Seo
Tags: ALM, LM&MA, ELM

Papers citing "Prometheus: Inducing Fine-grained Evaluation Capability in Language Models"

50 / 168 papers shown
Meta-Rewarding Language Models: Self-Improving Alignment with LLM-as-a-Meta-Judge (28 Jul 2024)
Tianhao Wu, Weizhe Yuan, O. Yu. Golovneva, Jing Xu, Yuandong Tian, Jiantao Jiao, Jason Weston, Sainbayar Sukhbaatar
Tags: ALM, KELM, LRM
Can Language Models Evaluate Human Written Text? Case Study on Korean Student Writing for Education (24 Jul 2024)
Seungyoon Kim, Seungone Kim
Tags: AI4Ed
AssistantBench: Can Web Agents Solve Realistic and Time-Consuming Tasks? (22 Jul 2024)
Ori Yoran, S. Amouyal, Chaitanya Malaviya, Ben Bogin, Ofir Press, Jonathan Berant
Tags: LLMAG
Improving Context-Aware Preference Modeling for Language Models (20 Jul 2024)
Silviu Pitis, Ziang Xiao, Nicolas Le Roux, Alessandro Sordoni
CLAVE: An Adaptive Framework for Evaluating Values of LLM Generated Responses (15 Jul 2024)
Jing Yao, Xiaoyuan Yi, Xing Xie
Tags: ELM, ALM
Lynx: An Open Source Hallucination Evaluation Model (11 Jul 2024)
Selvan Sunitha Ravi, B. Mielczarek, Anand Kannappan, Douwe Kiela, Rebecca Qian
Tags: VLM, RALM, HILM
OffsetBias: Leveraging Debiased Data for Tuning Evaluators (09 Jul 2024)
Junsoo Park, Seungyeon Jwa, Meiying Ren, Daeyoung Kim, Sanghyuk Choi
Tags: ALM
Towards Optimizing and Evaluating a Retrieval Augmented QA Chatbot using LLMs with Human in the Loop (08 Jul 2024)
Anum Afzal, Alexander Kowsik, Rajna Fani, Florian Matthes
Large Language Model as an Assignment Evaluator: Insights, Feedback, and Challenges in a 1000+ Student Course (07 Jul 2024)
Cheng-Han Chiang, Wei-Chih Chen, Chun-Yi Kuan, Chienchou Yang, Hung-yi Lee
Tags: ELM, AI4Ed
Evaluating Language Models for Generating and Judging Programming Feedback (05 Jul 2024)
Charles Koutcheme, Nicola Dainese, Arto Hellas, Sami Sarsa, Juho Leinonen, Syed Ashraf, Paul Denny
Tags: ELM
Human-Centered Design Recommendations for LLM-as-a-Judge (03 Jul 2024)
Qian Pan, Zahra Ashktorab, Michael Desmond, Martin Santillan Cooper, James M. Johnson, Rahul Nair, Elizabeth M. Daly, Werner Geyer
Tags: ELM, ALM
UniGen: A Unified Framework for Textual Dataset Generation Using Large Language Models (27 Jun 2024)
Siyuan Wu, Yue Huang, Chujie Gao, Dongping Chen, Qihui Zhang, ..., Tianyi Zhou, Xiangliang Zhang, Jianfeng Gao, Chaowei Xiao, Lichao Sun
Tags: SyDa
Themis: Towards Flexible and Interpretable NLG Evaluation (26 Jun 2024)
Xinyu Hu, Li Lin, Mingqi Gao, Xunjian Yin, Xiaojun Wan
Tags: ELM
Finding Blind Spots in Evaluator LLMs with Interpretable Checklists (19 Jun 2024)
Sumanth Doddapaneni, Mohammed Safi Ur Rahman Khan, Sshubam Verma, Mitesh Khapra
Interpretable Preferences via Multi-Objective Reward Modeling and Mixture-of-Experts (18 Jun 2024)
Haoxiang Wang, Wei Xiong, Tengyang Xie, Han Zhao, Tong Zhang
Unveiling Implicit Table Knowledge with Question-Then-Pinpoint Reasoner for Insightful Table Summarization (18 Jun 2024)
Kwangwook Seo, Jinyoung Yeo, Dongha Lee
Tags: ReLM, LMTD, LRM
On LLMs-Driven Synthetic Data Generation, Curation, and Evaluation: A Survey (14 Jun 2024)
Lin Long, Rui Wang, Ruixuan Xiao, Junbo Zhao, Xiao Ding, Gang Chen, Haobo Wang
Tags: SyDa
Merging Improves Self-Critique Against Jailbreak Attacks (11 Jun 2024)
Victor Gallego
Tags: AAML, MoMe
SciRIFF: A Resource to Enhance Language Model Instruction-Following over Scientific Literature (10 Jun 2024)
David Wadden, Kejian Shi, Jacob Morrison, Aakanksha Naik, Shruti Singh, ..., Luca Soldaini, Shannon Zejiang Shen, Doug Downey, Hannaneh Hajishirzi, Arman Cohan
The BiGGen Bench: A Principled Benchmark for Fine-grained Evaluation of Language Models with Language Models (09 Jun 2024)
Seungone Kim, Juyoung Suk, Ji Yong Cho, Shayne Longpre, Chaeeun Kim, ..., Sean Welleck, Graham Neubig, Moontae Lee, Kyungjae Lee, Minjoon Seo
Tags: ELM, ALM, LM&MA
A-Bench: Are LMMs Masters at Evaluating AI-generated Images? (05 Jun 2024)
Zicheng Zhang, H. Wu, Chunyi Li, Yingjie Zhou, Wei Sun, Xiongkuo Min, Zijian Chen, Xiaohong Liu, Weisi Lin, Guangtao Zhai
Tags: EGVM
TimeChara: Evaluating Point-in-Time Character Hallucination of Role-Playing Large Language Models (28 May 2024)
Jaewoo Ahn, Taehyun Lee, Junyoung Lim, Jin-Hwa Kim, Sangdoo Yun, Hwaran Lee, Gunhee Kim
Tags: LLMAG, HILM
Aligning to Thousands of Preferences via System Message Generalization (28 May 2024)
Seongyun Lee, Sue Hyun Park, Seungone Kim, Minjoon Seo
Tags: ALM
Aya 23: Open Weight Releases to Further Multilingual Progress (23 May 2024)
Viraat Aryabumi, John Dang, Dwarak Talupuru, Saurabh Dash, David Cairuz, ..., Aidan N. Gomez, Phil Blunsom, Marzieh Fadaee, A. Ustun, Sara Hooker
Tags: OSLM
Fennec: Fine-grained Language Model Evaluation and Correction Extended through Branching and Bridging (20 May 2024)
Xiaobo Liang, Haoke Zhang, Helan Hu, Juntao Li, Jun Xu, Min Zhang
Tags: ALM
FinTextQA: A Dataset for Long-form Financial Question Answering (16 May 2024)
Jian Chen, Peilin Zhou, Yining Hua, Yingxin Loh, Kehui Chen, Ziyuan Li, Bing Zhu, Junwei Liang
DEBATE: Devil's Advocate-Based Assessment and Text Evaluation (16 May 2024)
Alex G. Kim, Keonwoo Kim, Sangwon Yoon
Tags: ELM
Open Source Language Models Can Provide Feedback: Evaluating LLMs' Ability to Help Students Using GPT-4-As-A-Judge (08 May 2024)
Charles Koutcheme, Nicola Dainese, Sami Sarsa, Arto Hellas, Juho Leinonen, Paul Denny
Tags: ELM, ALM
ContextQ: Generated Questions to Support Meaningful Parent-Child Dialogue While Co-Reading (06 May 2024)
Griffin Dietz Smith, Siddhartha Prasad, Matt J. Davidson, Leah Findlater, R. Benjamin Shapiro
Self-Improving Customer Review Response Generation Based on LLMs (06 May 2024)
Guy Azov, Tatiana Pelc, Adi Fledel Alon, Gila Kamhi
Prometheus 2: An Open Source Language Model Specialized in Evaluating Other Language Models (02 May 2024)
Seungone Kim, Juyoung Suk, Shayne Longpre, Bill Yuchen Lin, Jamin Shin, Sean Welleck, Graham Neubig, Moontae Lee, Kyungjae Lee, Minjoon Seo
Tags: MoMe, ALM, ELM
CEval: A Benchmark for Evaluating Counterfactual Text Generation (26 Apr 2024)
Van Bach Nguyen, Jorg Schlotterer, Christin Seifert
METAL: Towards Multilingual Meta-Evaluation (02 Apr 2024)
Rishav Hada, Varun Gumma, Mohamed Ahmed, Kalika Bali, Sunayana Sitaram
Tags: ELM
CheckEval: A reliable LLM-as-a-Judge framework for evaluating text generation using checklists (27 Mar 2024)
Yukyung Lee, Joonghoon Kim, Jaehee Kim, Hyowon Cho, Pilsung Kang, Najoung Kim
Tags: ELM
Reinforcement Learning from Reflective Feedback (RLRF): Aligning and Improving LLMs via Fine-Grained Self-Reflection (21 Mar 2024)
Kyungjae Lee, Dasol Hwang, Sunghyun Park, Youngsoo Jang, Moontae Lee
RewardBench: Evaluating Reward Models for Language Modeling (20 Mar 2024)
Nathan Lambert, Valentina Pyatkin, Jacob Morrison, Lester James Validad Miranda, Bill Yuchen Lin, ..., Sachin Kumar, Tom Zick, Yejin Choi, Noah A. Smith, Hanna Hajishirzi
Tags: ALM
CodeUltraFeedback: An LLM-as-a-Judge Dataset for Aligning Large Language Models to Coding Preferences (14 Mar 2024)
M. Weyssow, Aton Kamanda, H. Sahraoui
Tags: ALM
Detectors for Safe and Reliable LLMs: Implementations, Uses, and Limitations (09 Mar 2024)
Swapnaja Achintalwar, Adriana Alvarado Garcia, Ateret Anaby-Tavor, Ioana Baldini, Sara E. Berger, ..., Aashka Trivedi, Kush R. Varshney, Dennis L. Wei, Shalisha Witherspoon, Marcel Zalmanovici
FAC²E: Better Understanding Large Language Model Capabilities by Dissociating Language and Cognition (29 Feb 2024)
Xiaoqiang Wang, Bang Liu, Lingfei Wu
Assisting in Writing Wikipedia-like Articles From Scratch with Large Language Models (22 Feb 2024)
Yijia Shao, Yucheng Jiang, Theodore A. Kanell, Peter Xu, Omar Khattab, Monica S. Lam
Tags: LLMAG, KELM
Ranking Large Language Models without Ground Truth (21 Feb 2024)
Amit Dhurandhar, Rahul Nair, Moninder Singh, Elizabeth M. Daly, K. Ramamurthy
Tags: HILM, ALM, ELM
Are LLM-based Evaluators Confusing NLG Quality Criteria? (19 Feb 2024)
Xinyu Hu, Mingqi Gao, Sen Hu, Yang Zhang, Yicheng Chen, Teng Xu, Xiaojun Wan
Tags: AAML, ELM
Multi-Task Inference: Can Large Language Models Follow Multiple Instructions at Once? (18 Feb 2024)
Guijin Son, Sangwon Baek, Sangdae Nam, Ilgyun Jeong, Seungone Kim
Tags: ELM, LRM
FactPICO: Factuality Evaluation for Plain Language Summarization of Medical Evidence (18 Feb 2024)
Sebastian Antony Joseph, Lily Chen, Jan Trienes, Hannah Louisa Göke, Monika Coers, Wei Xu, Byron C. Wallace, Junyi Jessy Li
Tags: LM&MA, HILM
Aligning Large Language Models by On-Policy Self-Judgment (17 Feb 2024)
Sangkyu Lee, Sungdong Kim, Ashkan Yousefpour, Minjoon Seo, Kang Min Yoo, Youngjae Yu
Tags: OSLM
DELL: Generating Reactions and Explanations for LLM-Based Misinformation Detection (16 Feb 2024)
Herun Wan, Shangbin Feng, Zhaoxuan Tan, Heng Wang, Yulia Tsvetkov, Minnan Luo
The Generative AI Paradox on Evaluation: What It Can Solve, It May Not Evaluate (09 Feb 2024)
Juhyun Oh, Eunsu Kim, Inha Cha, Alice H. Oh
Tags: ELM
LLM-based NLG Evaluation: Current Status and Challenges (02 Feb 2024)
Mingqi Gao, Xinyu Hu, Jie Ruan, Xiao Pu, Xiaojun Wan
Tags: ELM, LM&MA
What Does the Bot Say? Opportunities and Risks of Large Language Models in Social Media Bot Detection (01 Feb 2024)
Shangbin Feng, Herun Wan, Ningnan Wang, Zhaoxuan Tan, Minnan Luo, Yulia Tsvetkov
Tags: AAML, DeLMO
Self-Rewarding Language Models (18 Jan 2024)
Weizhe Yuan, Richard Yuanzhe Pang, Kyunghyun Cho, Xian Li, Sainbayar Sukhbaatar, Jing Xu, Jason Weston
Tags: ReLM, SyDa, ALM, LRM