ResearchTrend.AI

Reasons to Reject? Aligning Language Models with Judgments (arXiv:2312.14591)
22 December 2023
Weiwen Xu
Deng Cai
Zhisong Zhang
Wai Lam
Shuming Shi
    ALM

Papers citing "Reasons to Reject? Aligning Language Models with Judgments"

18 / 18 papers shown
A Survey on Progress in LLM Alignment from the Perspective of Reward Design
Miaomiao Ji, Yanqiu Wu, Zhibin Wu, Shoujin Wang, Jian Yang, Mark Dras, Usman Naseem
05 May 2025

Natural Language Fine-Tuning
J. Liu, Yue Wang, Zhiqi Lin, Min Chen, Yixue Hao, Long Hu
31 Dec 2024

FaGeL: Fabric LLMs Agent empowered Embodied Intelligence Evolution with Autonomous Human-Machine Collaboration
Jia Liu, Min Chen
LM&Ro, AI4CE
28 Dec 2024

Reasoning Paths Optimization: Learning to Reason and Explore From Diverse Paths
Yew Ken Chia, Guizhen Chen, Weiwen Xu, Luu Anh Tuan, Soujanya Poria, Lidong Bing
LRM
07 Oct 2024

UICrit: Enhancing Automated Design Evaluation with a UICritique Dataset
Peitong Duan, Chin-yi Chen, Gang Li, Bjoern Hartmann, Yang Li
11 Jul 2024

On the Transformations across Reward Model, Parameter Update, and In-Context Prompt
Deng Cai, Huayang Li, Tingchen Fu, Siheng Li, Weiwen Xu, ..., Leyang Cui, Yan Wang, Lemao Liu, Taro Watanabe, Shuming Shi
KELM
24 Jun 2024

MACAROON: Training Vision-Language Models To Be Your Engaged Partners
Shujin Wu, Yi Ren Fung, Sha Li, Yixin Wan, Kai-Wei Chang, Heng Ji
20 Jun 2024

A Survey on Human Preference Learning for Large Language Models
Ruili Jiang, Kehai Chen, Xuefeng Bai, Zhixuan He, Juntao Li, Muyun Yang, Tiejun Zhao, Liqiang Nie, Min Zhang
17 Jun 2024

RaFe: Ranking Feedback Improves Query Rewriting for RAG
Shengyu Mao, Yong-jia Jiang, Boli Chen, Xiao Li, Peng Wang, Xinyu Wang, Pengjun Xie, Fei Huang, Huajun Chen, Ningyu Zhang
RALM
23 May 2024

CriticBench: Evaluating Large Language Models as Critic
Tian Lan, Wenwei Zhang, Chen Xu, Heyan Huang, Dahua Lin, Kai-xiang Chen, Xian-Ling Mao
ELM, AI4MH, LRM
21 Feb 2024

EventRL: Enhancing Event Extraction with Outcome Supervision for Large Language Models
Jun Gao, Huan Zhao, Wei Wang, Changlong Yu, Ruifeng Xu
OffRL
18 Feb 2024

ICDPO: Effectively Borrowing Alignment Capability of Others via In-context Direct Preference Optimization
Feifan Song, Yuxuan Fan, Xin Zhang, Peiyi Wang, Houfeng Wang
14 Feb 2024

Stabilizing RLHF through Advantage Model and Selective Rehearsal
Baolin Peng, Linfeng Song, Ye Tian, Lifeng Jin, Haitao Mi, Dong Yu
18 Sep 2023

Improving alignment of dialogue agents via targeted human judgements
Amelia Glaese, Nat McAleese, Maja Trębacz, John Aslanides, Vlad Firoiu, ..., John F. J. Mellor, Demis Hassabis, Koray Kavukcuoglu, Lisa Anne Hendricks, G. Irving
ALM, AAML
28 Sep 2022

Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
OSLM, ALM
04 Mar 2022

Fine-Tuning Language Models from Human Preferences
Daniel M. Ziegler, Nisan Stiennon, Jeff Wu, Tom B. Brown, Alec Radford, Dario Amodei, Paul Christiano, G. Irving
ALM
18 Sep 2019

Dialogue Learning With Human-In-The-Loop
Jiwei Li, Alexander H. Miller, S. Chopra, Marc'Aurelio Ranzato, Jason Weston
OffRL
29 Nov 2016

Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation
Yonghui Wu, M. Schuster, Z. Chen, Quoc V. Le, Mohammad Norouzi, ..., Alex Rudnick, Oriol Vinyals, G. Corrado, Macduff Hughes, J. Dean
AIMat
26 Sep 2016