Reinforced Self-Training (ReST) for Language Modeling
arXiv:2308.08998 · 17 August 2023
Çağlar Gülçehre, T. Paine, S. Srinivasan, Ksenia Konyushkova, L. Weerts, Abhishek Sharma, Aditya Siddhant, Alexa Ahern, Miaosen Wang, Chenjie Gu, Wolfgang Macherey, Arnaud Doucet, Orhan Firat, Nando de Freitas · [OffRL]

Papers citing "Reinforced Self-Training (ReST) for Language Modeling" (showing 50 of 225)

DMoERM: Recipes of Mixture-of-Experts for Effective Reward Modeling
Shanghaoran Quan · [MoE, OffRL] · 02 Mar 2024

ArCHer: Training Language Model Agents via Hierarchical Multi-Turn RL
Yifei Zhou, Andrea Zanette, Jiayi Pan, Sergey Levine, Aviral Kumar · 29 Feb 2024

Arithmetic Control of LLMs for Diverse User Preferences: Directional Preference Alignment with Multi-Objective Rewards
Haoxiang Wang, Yong Lin, Wei Xiong, Rui Yang, Shizhe Diao, Shuang Qiu, Han Zhao, Tong Zhang · 28 Feb 2024

Learning to Generate Instruction Tuning Datasets for Zero-Shot Task Adaptation
Nihal V. Nayak, Yiyang Nan, Avi Trost, Stephen H. Bach · [SyDa] · 28 Feb 2024

SoFA: Shielded On-the-fly Alignment via Priority Rule Following
Xinyu Lu, Bowen Yu, Yaojie Lu, Hongyu Lin, Haiyang Yu, Le Sun, Xianpei Han, Yongbin Li · 27 Feb 2024

Q-Probe: A Lightweight Approach to Reward Maximization for Language Models
Kenneth Li, Samy Jelassi, Hugh Zhang, Sham Kakade, Martin Wattenberg, David Brandfonbrener · 22 Feb 2024

Large Language Models for Data Annotation: A Survey
Zhen Tan, Dawei Li, Song Wang, Alimohammad Beigi, Bohan Jiang, Amrita Bhattacharjee, Mansooreh Karami, Jundong Li, Lu Cheng, Huan Liu · [SyDa] · 21 Feb 2024

A Survey on Knowledge Distillation of Large Language Models
Xiaohan Xu, Ming Li, Chongyang Tao, Tao Shen, Reynold Cheng, Jinyang Li, Can Xu, Dacheng Tao, Tianyi Zhou · [KELM, VLM] · 20 Feb 2024

Enabling Weak LLMs to Judge Response Reliability via Meta Ranking
Zijun Liu, Boqun Kou, Peng Li, Ming Yan, Ji Zhang, Fei Huang, Yang Janet Liu · 19 Feb 2024

Learning to Learn Faster from Human Feedback with Language Model Predictive Control
Jacky Liang, Fei Xia, Wenhao Yu, Andy Zeng, Montse Gonzalez Arenas, ..., N. Heess, Kanishka Rao, Nik Stewart, Jie Tan, Carolina Parada · [LM&Ro] · 18 Feb 2024

Aligning Large Language Models by On-Policy Self-Judgment
Sangkyu Lee, Sungdong Kim, Ashkan Yousefpour, Minjoon Seo, Kang Min Yoo, Youngjae Yu · [OSLM] · 17 Feb 2024

I Learn Better If You Speak My Language: Understanding the Superior Performance of Fine-Tuning Large Language Models with LLM-Generated Responses
Xuan Ren, Biao Wu, Lingqiao Liu · 17 Feb 2024

Rewards-in-Context: Multi-objective Alignment of Foundation Models with Dynamic Preference Adjustment
Rui Yang, Xiaoman Pan, Feng Luo, Shuang Qiu, Han Zhong, Dong Yu, Jianshu Chen · 15 Feb 2024

Reward Generalization in RLHF: A Topological Perspective
Tianyi Qiu, Fanzhi Zeng, Jiaming Ji, Dong Yan, Kaile Wang, Jiayi Zhou, Yang Han, Josef Dai, Xuehai Pan, Yaodong Yang · [AI4CE] · 15 Feb 2024

RS-DPO: A Hybrid Rejection Sampling and Direct Preference Optimization Method for Alignment of Large Language Models
Saeed Khaki, JinJin Li, Lan Ma, Liu Yang, Prathap Ramachandra · 15 Feb 2024

Suppressing Pink Elephants with Direct Principle Feedback
Louis Castricato, Nathan Lile, Suraj Anand, Hailey Schoelkopf, Siddharth Verma, Stella Biderman · 12 Feb 2024

ODIN: Disentangled Reward Mitigates Hacking in RLHF
Lichang Chen, Chen Zhu, Davit Soselia, Jiuhai Chen, Tianyi Zhou, Tom Goldstein, Heng-Chiao Huang, M. Shoeybi, Bryan Catanzaro · [AAML] · 11 Feb 2024

Online Iterative Reinforcement Learning from Human Feedback with General Preference Model
Chen Ye, Wei Xiong, Yuheng Zhang, Nan Jiang, Tong Zhang · [OffRL] · 11 Feb 2024

V-STaR: Training Verifiers for Self-Taught Reasoners
Arian Hosseini, Xingdi Yuan, Nikolay Malkin, Aaron C. Courville, Alessandro Sordoni, Rishabh Agarwal · [ReLM, LRM] · 09 Feb 2024

Limitations of Agents Simulated by Predictive Models
Raymond Douglas, Jacek Karwowski, Chan Bae, Andis Draguns, Victoria Krakovna · 08 Feb 2024

Training Large Language Models for Reasoning through Reverse Curriculum Reinforcement Learning
Zhiheng Xi, Wenxiang Chen, Boyang Hong, Senjie Jin, Rui Zheng, ..., Xinbo Zhang, Peng Sun, Tao Gui, Qi Zhang, Xuanjing Huang · [LRM] · 08 Feb 2024

CodeIt: Self-Improving Language Models with Prioritized Hindsight Replay
Natasha Butt, Blazej Manczak, Auke Wiggers, Corrado Rainone, David W. Zhang, Michaël Defferrard, Taco S. Cohen · [ReLM, LRM] · 07 Feb 2024

Aligner: Efficient Alignment by Learning to Correct
Jiaming Ji, Boyuan Chen, Hantao Lou, Donghai Hong, Borong Zhang, Xuehai Pan, Juntao Dai, Tianyi Qiu, Yaodong Yang · 04 Feb 2024

Distilling LLMs' Decomposition Abilities into Compact Language Models
Denis Tarasov, Kumar Shridhar · [SyDa, OffRL, LRM] · 02 Feb 2024

YODA: Teacher-Student Progressive Learning for Language Models
Jianqiao Lu, Wanjun Zhong, Yufei Wang, Zhijiang Guo, Qi Zhu, ..., Baojun Wang, Yasheng Wang, Lifeng Shang, Xin Jiang, Qun Liu · [LRM] · 28 Jan 2024

Improving Machine Translation with Human Feedback: An Exploration of Quality Estimation as a Reward Model
Zhiwei He, Xing Wang, Wenxiang Jiao, Zhuosheng Zhang, Rui Wang, Shuming Shi, Zhaopeng Tu · [ALM] · 23 Jan 2024

Linear Alignment: A Closed-form Solution for Aligning Human Preferences without Tuning and Feedback
Songyang Gao, Qiming Ge, Wei Shen, Shihan Dou, Junjie Ye, ..., Yicheng Zou, Zhi Chen, Hang Yan, Qi Zhang, Dahua Lin · 21 Jan 2024

Self-Rewarding Language Models
Weizhe Yuan, Richard Yuanzhe Pang, Kyunghyun Cho, Xian Li, Sainbayar Sukhbaatar, Jing Xu, Jason Weston · [ReLM, SyDa, ALM, LRM] · 18 Jan 2024

ReFT: Reasoning with Reinforced Fine-Tuning
Trung Quoc Luong, Xinbo Zhang, Zhanming Jie, Peng Sun, Xiaoran Jin, Hang Li · [OffRL, LRM, ReLM] · 17 Jan 2024

Don't Rank, Combine! Combining Machine Translation Hypotheses Using Quality Estimation
Giorgos Vernikos, Andrei Popescu-Belis · 12 Jan 2024

Integrating Physician Diagnostic Logic into Large Language Models: Preference Learning from Process Feedback
Chengfeng Dou, Zhi Jin, Wenpin Jiao, Haiyan Zhao, Yongqiang Zhao, Zhenwei Tao · [LM&MA] · 11 Jan 2024

AutoAct: Automatic Agent Learning from Scratch for QA via Self-Planning
Shuofei Qiao, Ningyu Zhang, Runnan Fang, Yujie Luo, Wangchunshu Zhou, Yuchen Eleanor Jiang, Chengfei Lv, Huajun Chen · [LLMAG] · 10 Jan 2024

Bootstrapping LLM-based Task-Oriented Dialogue Agents via Self-Talk
Dennis Ulmer, Elman Mansimov, Kaixiang Lin, Justin Sun, Xibin Gao, Yi Zhang · [LLMAG] · 10 Jan 2024

Agent Alignment in Evolving Social Norms
Shimin Li, Tianxiang Sun, Qinyuan Cheng, Xipeng Qiu · [LLMAG] · 09 Jan 2024

Evaluating Language Model Agency through Negotiations
Tim R. Davidson, V. Veselovsky, Martin Josifoski, Maxime Peyrard, Antoine Bosselut, Michal Kosinski, Robert West · [LLMAG] · 09 Jan 2024

Human-Instruction-Free LLM Self-Alignment with Limited Samples
Hongyi Guo, Yuanshun Yao, Wei Shen, Jiaheng Wei, Xiaoying Zhang, Zhaoran Wang, Yang Liu · 06 Jan 2024

Uncertainty-Penalized Reinforcement Learning from Human Feedback with Diverse Reward LoRA Ensembles
Yuanzhao Zhai, Han Zhang, Yu Lei, Yue Yu, Kele Xu, Dawei Feng, Bo Ding, Huaimin Wang · [AI4CE] · 30 Dec 2023

Some things are more CRINGE than others: Iterative Preference Optimization with the Pairwise Cringe Loss
Jing Xu, Andrew Lee, Sainbayar Sukhbaatar, Jason Weston · 27 Dec 2023

Iterative Preference Learning from Human Feedback: Bridging Theory and Practice for RLHF under KL-Constraint
Wei Xiong, Hanze Dong, Chen Ye, Ziqi Wang, Han Zhong, Heng Ji, Nan Jiang, Tong Zhang · [OffRL] · 18 Dec 2023

ReST meets ReAct: Self-Improvement for Multi-Step Reasoning LLM Agent
Renat Aksitov, Sobhan Miryoosefi, Zong-xiao Li, Daliang Li, Sheila Babayan, ..., Sushant Prakash, Pranesh Srinivasan, Manzil Zaheer, Felix X. Yu, Sanjiv Kumar · [LRM, ReLM, LLMAG, KELM] · 15 Dec 2023

Helping or Herding? Reward Model Ensembles Mitigate but do not Eliminate Reward Hacking
Jacob Eisenstein, Chirag Nagpal, Alekh Agarwal, Ahmad Beirami, Alex D'Amour, ..., Katherine Heller, Stephen R. Pfohl, Deepak Ramachandran, Peter Shaw, Jonathan Berant · 14 Dec 2023

Beyond Human Data: Scaling Self-Training for Problem-Solving with Language Models
Avi Singh, John D. Co-Reyes, Rishabh Agarwal, Ankesh Anand, Piyush Patil, ..., Yamini Bansal, Ethan Dyer, Behnam Neyshabur, Jascha Narain Sohl-Dickstein, Noah Fiedel · [ALM, LRM, ReLM, SyDa] · 11 Dec 2023

Distilled Self-Critique of LLMs with Synthetic Data: a Bayesian Perspective
Víctor Gallego · 04 Dec 2023

Diffusion Model Alignment Using Direct Preference Optimization
Bram Wallace, Meihua Dang, Rafael Rafailov, Linqi Zhou, Aaron Lou, Senthil Purushwalkam, Stefano Ermon, Caiming Xiong, Shafiq R. Joty, Nikhil Naik · [EGVM] · 21 Nov 2023

Camels in a Changing Climate: Enhancing LM Adaptation with Tulu 2
Hamish Ivison, Yizhong Wang, Valentina Pyatkin, Nathan Lambert, Matthew E. Peters, ..., Joel Jang, David Wadden, Noah A. Smith, Iz Beltagy, Hanna Hajishirzi · [ALM, ELM] · 17 Nov 2023

Aligning Neural Machine Translation Models: Human Feedback in Training and Inference
Miguel Moura Ramos, Patrick Fernandes, António Farinhas, André F. T. Martins · [ALM] · 15 Nov 2023

Direct Preference Optimization for Neural Machine Translation with Minimum Bayes Risk Decoding
Guangyu Yang, Jinghong Chen, Weizhe Lin, Bill Byrne · 14 Nov 2023

Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment
Geyang Guo, Ranchi Zhao, Tianyi Tang, Wayne Xin Zhao, Ji-Rong Wen · [ALM] · 07 Nov 2023

Can LLMs Follow Simple Rules?
Norman Mu, Sarah Chen, Zifan Wang, Sizhe Chen, David Karamardian, Lulwa Aljeraisy, Basel Alomair, Dan Hendrycks, David A. Wagner · [ALM] · 06 Nov 2023

Vanishing Gradients in Reinforcement Finetuning of Language Models
Noam Razin, Hattie Zhou, Omid Saremi, Vimal Thilak, Arwen Bradley, Preetum Nakkiran, Josh Susskind, Etai Littwin · 31 Oct 2023