Prior Constraints-based Reward Model Training for Aligning Large Language Models
arXiv: 2404.00978
1 April 2024
Hang Zhou, Chenglong Wang, Yimin Hu, Tong Xiao, Chunliang Zhang, Jingbo Zhu
ALM

Papers citing "Prior Constraints-based Reward Model Training for Aligning Large Language Models" (3 of 3 papers shown)

Direct Preference Optimization with an Offset
Afra Amini, Tim Vieira, Ryan Cotterell
16 Feb 2024

Panacea: Pareto Alignment via Preference Adaptation for LLMs
Yifan Zhong, Chengdong Ma, Xiaoyuan Zhang, Ziran Yang, Haojun Chen, Qingfu Zhang, Siyuan Qi, Yaodong Yang
03 Feb 2024

Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
OSLM, ALM
04 Mar 2022