Reinforced Self-Training (ReST) for Language Modeling (arXiv:2308.08998)
17 August 2023
Çağlar Gülçehre, T. Paine, S. Srinivasan, Ksenia Konyushkova, L. Weerts, Abhishek Sharma, Aditya Siddhant, Alexa Ahern, Miaosen Wang, Chenjie Gu, Wolfgang Macherey, Arnaud Doucet, Orhan Firat, Nando de Freitas
Tags: OffRL

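For context, ReST (the paper this page indexes) alternates a Grow step, in which the current policy samples candidate outputs for the training prompts, with one or more Improve steps, in which samples scoring above an increasing reward threshold are kept and the policy is fine-tuned on them offline. The sketch below is a minimal illustration of that loop, not the paper's implementation: policy.generate, policy.finetune, and reward_fn are placeholder names, and all hyperparameter values are assumptions chosen for readability.

    # Hedged sketch of a ReST-style Grow/Improve loop (placeholder API, illustrative values).
    def rest_sketch(policy, prompts, reward_fn, grow_steps=3, improve_steps=4,
                    samples_per_prompt=32, thresholds=(0.0, 0.3, 0.6, 0.9)):
        for _ in range(grow_steps):
            # Grow: sample candidate outputs from the current policy and score
            # each one once with a frozen reward model.
            dataset = []
            for prompt in prompts:
                for output in policy.generate(prompt, n=samples_per_prompt):
                    dataset.append((prompt, output, reward_fn(prompt, output)))

            # Improve: fine-tune on the subset whose reward clears an increasing
            # threshold, so later Improve steps train only on the best samples.
            for step in range(improve_steps):
                tau = thresholds[min(step, len(thresholds) - 1)]
                filtered = [(p, o) for p, o, r in dataset if r >= tau]
                policy = policy.finetune(filtered)
        return policy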

Papers citing "Reinforced Self-Training (ReST) for Language Modeling"

Showing 25 of 225 citing papers.

Improving Compositional Generalization Using Iterated Learning and Simplicial Embeddings
Yi Ren, Samuel Lavoie, Mikhail Galkin, Danica J. Sutherland, Aaron Courville
28 Oct 2023

SoK: Memorization in General-Purpose Large Language Models
Valentin Hartmann, Anshuman Suri, Vincent Bindschaedler, David E. Evans, Shruti Tople, Robert West
Tags: KELM, LLMAG
24 Oct 2023

xCOMET: Transparent Machine Translation Evaluation through Fine-grained Error Detection
Nuno M. Guerreiro, Ricardo Rei, Daan van Stigt, Luísa Coheur, Pierre Colombo, André F.T. Martins
16 Oct 2023

SALMON: Self-Alignment with Instructable Reward Models
Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David D. Cox, Yiming Yang, Chuang Gan
Tags: ALM, SyDa
09 Oct 2023

$\mathcal{B}$-Coder: Value-Based Deep Reinforcement Learning for Program Synthesis
Zishun Yu, Yunzhe Tao, Liyu Chen, Tao Sun, Hongxia Yang
04 Oct 2023

Reward Model Ensembles Help Mitigate Overoptimization
Thomas Coste, Usman Anwar, Robert Kirk, David M. Krueger
Tags: NoLa, ALM
04 Oct 2023

Enabling Language Models to Implicitly Learn Self-Improvement
Ziqi Wang, Le Hou, Tianjian Lu, Yuexin Wu, Yunxuan Li, Hongkun Yu, Heng Ji
Tags: ReLM, LRM
02 Oct 2023

Parameter-Efficient Tuning Helps Language Model Alignment
Tianci Xue, Ziqi Wang, Heng Ji
Tags: ALM
01 Oct 2023

Pairwise Proximal Policy Optimization: Harnessing Relative Feedback for LLM Alignment
Tianhao Wu, Banghua Zhu, Ruoyu Zhang, Zhaojin Wen, Kannan Ramchandran, Jiantao Jiao
30 Sep 2023

Alphazero-like Tree-Search can Guide Large Language Model Decoding and Training
Xidong Feng, Ziyu Wan, Muning Wen, Stephen Marcus McAleer, Ying Wen, Weinan Zhang, Jun Wang
Tags: LRM, AI4CE
29 Sep 2023

Language Models as a Service: Overview of a New Paradigm and its Challenges
Emanuele La Malfa, Aleksandar Petrov, Simon Frieder, Christoph Weinhuber, Ryan Burnell, Raza Nazar, Anthony Cohn, Nigel Shadbolt, Michael Wooldridge
Tags: ALM, ELM
28 Sep 2023

MBR and QE Finetuning: Training-time Distillation of the Best and Most Expensive Decoding Methods
M. Finkelstein, Subhajit Naskar, Mehdi Mirzazadeh, Apurva Shah, Markus Freitag
19 Sep 2023

Stabilizing RLHF through Advantage Model and Selective Rehearsal
Baolin Peng, Linfeng Song, Ye Tian, Lifeng Jin, Haitao Mi, Dong Yu
18 Sep 2023

PDFTriage: Question Answering over Long, Structured Documents
Jon Saad-Falcon, Joe Barrow, Alexa F. Siu, A. Nenkova, David Seunghyun Yoon, Ryan A. Rossi, Franck Dernoncourt
Tags: RALM
16 Sep 2023

Statistical Rejection Sampling Improves Preference Optimization
Tianqi Liu, Yao-Min Zhao, Rishabh Joshi, Misha Khalman, Mohammad Saleh, Peter J. Liu, Jialu Liu
13 Sep 2023

Mitigating the Alignment Tax of RLHF
Yong Lin, Hangyu Lin, Wei Xiong, Shizhe Diao, Zeming Zheng, ..., Han Zhao, Nan Jiang, Heng Ji, Yuan Yao, Tong Zhang
Tags: MoMe, CLL
12 Sep 2023

Automatically Correcting Large Language Models: Surveying the landscape of diverse self-correction strategies
Liangming Pan, Michael Stephen Saxon, Wenda Xu, Deepak Nathani, Xinyi Wang, William Yang Wang
Tags: KELM, LRM
06 Aug 2023

Linear Alignment of Vision-language Models for Image Captioning
Fabian Paischer, M. Hofmarcher, Sepp Hochreiter, Thomas Adler
Tags: CLIP, VLM
10 Jul 2023

Provably Efficient Iterated CVaR Reinforcement Learning with Function Approximation and Human Feedback
Yu Chen, Yihan Du, Pihe Hu, Si-Yi Wang, De-hui Wu, Longbo Huang
06 Jul 2023

Sparks of Artificial General Intelligence: Early experiments with GPT-4
Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, J. Gehrke, Eric Horvitz, ..., Scott M. Lundberg, Harsha Nori, Hamid Palangi, Marco Tulio Ribeiro, Yi Zhang
Tags: ELM, AI4MH, AI4CE, ALM
22 Mar 2023

Improving alignment of dialogue agents via targeted human judgements
Amelia Glaese, Nat McAleese, Maja Trębacz, John Aslanides, Vlad Firoiu, ..., John F. J. Mellor, Demis Hassabis, Koray Kavukcuoglu, Lisa Anne Hendricks, G. Irving
Tags: ALM, AAML
28 Sep 2022

Defining and Characterizing Reward Hacking
Joar Skalse, Nikolaus H. R. Howe, Dmitrii Krasheninnikov, David M. Krueger
27 Sep 2022

Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
Tags: OSLM, ALM
04 Mar 2022

Self-Training: A Survey
Massih-Reza Amini, Vasilii Feofanov, Loïc Pauletto, Lies Hadjadj, Emilie Devijver, Yury Maximov
Tags: SSL
24 Feb 2022

Revisiting Self-Training for Neural Sequence Generation
Junxian He, Jiatao Gu, Jiajun Shen, Marc'Aurelio Ranzato
Tags: SSL, LRM
30 Sep 2019