Is Reinforcement Learning (Not) for Natural Language Processing: Benchmarks, Baselines, and Building Blocks for Natural Language Policy Optimization
3 October 2022
Rajkumar Ramamurthy
Prithviraj Ammanabrolu
Kianté Brantley
Jack Hessel
R. Sifa
Christian Bauckhage
Hannaneh Hajishirzi
Yejin Choi
OffRL
arXiv: 2210.01241
Papers citing "Is Reinforcement Learning (Not) for Natural Language Processing: Benchmarks, Baselines, and Building Blocks for Natural Language Policy Optimization" (showing 50 of 202)
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Stephen Casper, Xander Davies, Claudia Shi, T. Gilbert, Jérémy Scheurer, ..., Erdem Biyik, Anca Dragan, David M. Krueger, Dorsa Sadigh, Dylan Hadfield-Menell
ALM, OffRL · 436 citations · 27 Jul 2023

On the Effectiveness of Offline RL for Dialogue Response Generation
Paloma Sodhi, Felix Wu, Ethan R. Elenberg, Kilian Q. Weinberger, Ryan T. McDonald
OffRL · 5 citations · 23 Jul 2023

FLASK: Fine-grained Language Model Evaluation based on Alignment Skill Sets
Seonghyeon Ye, Doyoung Kim, Sungdong Kim, Hyeonbin Hwang, Seungone Kim, Yongrae Jo, James Thorne, Juho Kim, Minjoon Seo
ALM · 96 citations · 20 Jul 2023

Assessing the efficacy of large language models in generating accurate teacher responses
Yann Hicke, Abhishek Masand, Wentao Guo, Tushaar Gangavarapu
ELM, AI4Ed · 9 citations · 09 Jul 2023

Censored Sampling of Diffusion Models Using 3 Minutes of Human Feedback
Taeho Yoon, Kibeom Myoung, Keon Lee, Jaewoong Cho, Albert No, Ernest K. Ryu
6 citations · 06 Jul 2023

Let Me Teach You: Pedagogical Foundations of Feedback for Language Models
Beatriz Borges, Niket Tandon, Tanja Kaser, Antoine Bosselut
3 citations · 01 Jul 2023

Learning to Generate Better Than Your LLM
Jonathan D. Chang, Kianté Brantley, Rajkumar Ramamurthy, Dipendra Kumar Misra, Wen Sun
39 citations · 20 Jun 2023

Learning Profitable NFT Image Diffusions via Multiple Visual-Policy Guided Reinforcement Learning
Huiguo He, Tianfu Wang, Huan Yang, Jianlong Fu, N. Yuan, Jian Yin, Hongyang Chao, Qi Zhang
EGVM · 9 citations · 20 Jun 2023

MiniLLM: Knowledge Distillation of Large Language Models
Yuxian Gu, Li Dong, Furu Wei, Minlie Huang
ALM · 75 citations · 14 Jun 2023

The BEA 2023 Shared Task on Generating AI Teacher Responses in Educational Dialogues
Anaïs Tack, E. Kochmar, Zheng Yuan, Serge Bibauw, Chris Piech
16 citations · 12 Jun 2023

Fine-Tuning Language Models with Advantage-Induced Policy Alignment
Banghua Zhu, Hiteshi Sharma, Felipe Vieira Frujeri, Shi Dong, Chenguang Zhu, Michael I. Jordan, Jiantao Jiao
OSLM · 33 citations · 04 Jun 2023

Fine-Grained Human Feedback Gives Better Rewards for Language Model Training
Zeqiu Wu, Yushi Hu, Weijia Shi, Nouha Dziri, Alane Suhr, Prithviraj Ammanabrolu, Noah A. Smith, Mari Ostendorf, Hannaneh Hajishirzi
ALM · 301 citations · 02 Jun 2023

Preference-grounded Token-level Guidance for Language Model Fine-tuning
Shentao Yang, Shujian Zhang, Congying Xia, Yihao Feng, Caiming Xiong, Mi Zhou
22 citations · 01 Jun 2023

Direct Preference Optimization: Your Language Model is Secretly a Reward Model
Rafael Rafailov, Archit Sharma, E. Mitchell, Stefano Ermon, Christopher D. Manning, Chelsea Finn
ALM · 3,268 citations · 29 May 2023

Provable Reward-Agnostic Preference-Based Reinforcement Learning
Wenhao Zhan, Masatoshi Uehara, Wen Sun, Jason D. Lee
7 citations · 29 May 2023

Inference-Time Policy Adapters (IPA): Tailoring Extreme-Scale LMs without Fine-tuning
Ximing Lu, Faeze Brahman, Peter West, Jaehun Jang, Khyathi Raghavi Chandu, ..., Bill Yuchen Lin, Skyler Hallinan, Xiang Ren, Sean Welleck, Yejin Choi
26 citations · 24 May 2023

Provable Offline Preference-Based Reinforcement Learning
Wenhao Zhan, Masatoshi Uehara, Nathan Kallus, Jason D. Lee, Wen Sun
OffRL · 12 citations · 24 May 2023

Leftover Lunch: Advantage-based Offline Reinforcement Learning for Language Models
Ashutosh Baheti, Ximing Lu, Faeze Brahman, Ronan Le Bras, Maarten Sap, Mark O. Riedl
9 citations · 24 May 2023

DecipherPref: Analyzing Influential Factors in Human Preference Judgments via GPT-4
Ye Hu, Kaiqiang Song, Sangwoo Cho, Xiaoyang Wang, H. Foroosh, Fei Liu
7 citations · 24 May 2023

Query Rewriting for Retrieval-Augmented Large Language Models
Xinbei Ma, Yeyun Gong, Pengcheng He, Hai Zhao, Nan Duan
KELM, LRM · 99 citations · 23 May 2023

AlpacaFarm: A Simulation Framework for Methods that Learn from Human Feedback
Yann Dubois, Xuechen Li, Rohan Taori, Tianyi Zhang, Ishaan Gulrajani, Jimmy Ba, Carlos Guestrin, Percy Liang, Tatsunori B. Hashimoto
ALM · 531 citations · 22 May 2023

A Survey of Safety and Trustworthiness of Large Language Models through the Lens of Verification and Validation
Xiaowei Huang, Wenjie Ruan, Wei Huang, Gao Jin, Yizhen Dong, ..., Sihao Wu, Peipei Xu, Dengyu Wu, André Freitas, Mustafa A. Mustafa
ALM · 81 citations · 19 May 2023

LeTI: Learning to Generate from Textual Interactions
Xingyao Wang, Hao Peng, Reyhaneh Jabbarvand, Heng Ji
30 citations · 17 May 2023

RL4F: Generating Natural Language Feedback with Reinforcement Learning for Repairing Model Outputs
Afra Feyza Akyürek, Ekin Akyürek, Aman Madaan, A. Kalyan, Peter Clark, Derry Wijaya, Niket Tandon
ALM, KELM · 85 citations · 15 May 2023

Personalized Abstractive Summarization by Tri-agent Generation Pipeline
Md Aminul Haque Palash, Sourav Saha, Faria Afrin, Pengcheng He
4 citations · 04 May 2023

Beyond Prompts: Exploring the Design Space of Mixed-Initiative Co-Creativity Systems
Zhiyu Lin, Upol Ehsan, Rohan Agarwal, Samihan Dani, Vidushi Vashishth, Mark O. Riedl
20 citations · 03 May 2023

Mitigating Approximate Memorization in Language Models via Dissimilarity Learned Policy
Aly M. Kassem
2 citations · 02 May 2023

RAFT: Reward rAnked FineTuning for Generative Foundation Model Alignment
Hanze Dong, Wei Xiong, Deepanshu Goyal, Yihan Zhang, Winnie Chow, Rui Pan, Shizhe Diao, Jipeng Zhang, Kashun Shum, Tong Zhang
ALM · 397 citations · 13 Apr 2023

RRHF: Rank Responses to Align Language Models with Human Feedback without tears
Zheng Yuan, Hongyi Yuan, Chuanqi Tan, Wei Wang, Songfang Huang, Feiran Huang
ALM · 341 citations · 11 Apr 2023

REFINER: Reasoning Feedback on Intermediate Representations
Debjit Paul, Mete Ismayilzada, Maxime Peyrard, Beatriz Borges, Antoine Bosselut, Robert West, Boi Faltings
ReLM, LRM · 168 citations · 04 Apr 2023

disco: a toolkit for Distributional Control of Generative Models
Germán Kruszewski, Jos Rozen, Marc Dymetman
4 citations · 08 Mar 2023

A Comprehensive Survey of AI-Generated Content (AIGC): A History of Generative AI from GAN to ChatGPT
Yihan Cao, Siyu Li, Yixin Liu, Zhiling Yan, Yutong Dai, Philip S. Yu, Lichao Sun
493 citations · 07 Mar 2023

Guiding Large Language Models via Directional Stimulus Prompting
Zekun Li, Baolin Peng, Pengcheng He, Michel Galley, Jianfeng Gao, Xi Yan
LLMAG, LRM, LM&Ro · 91 citations · 22 Feb 2023

Augmented Language Models: a Survey
Grégoire Mialon, Roberto Dessì, Maria Lomeli, Christoforos Nalmpantis, Ramakanth Pasunuru, ..., Jane Dwivedi-Yu, Asli Celikyilmaz, Edouard Grave, Yann LeCun, Thomas Scialom
LRM, KELM · 362 citations · 15 Feb 2023

Grounding Large Language Models in Interactive Environments with Online Reinforcement Learning
Thomas Carta, Clément Romac, Thomas Wolf, Sylvain Lamprier, Olivier Sigaud, Pierre-Yves Oudeyer
LM&Ro, LLMAG · 106 citations · 06 Feb 2023

Direct Preference-based Policy Optimization without Reward Modeling
Gaon An, Junhyeok Lee, Xingdong Zuo, Norio Kosaka, KyungHyun Kim, Hyun Oh Song
OffRL · 24 citations · 30 Jan 2023

Principled Reinforcement Learning with Human Feedback from Pairwise or K-wise Comparisons
Banghua Zhu, Jiantao Jiao, Michael I. Jordan
OffRL · 119 citations · 26 Jan 2023

Critic-Guided Decoding for Controlled Text Generation
Minbeom Kim, Hwanhee Lee, Kang Min Yoo, Joonsuk Park, Hwaran Lee, Kyomin Jung
35 citations · 21 Dec 2022

ClarifyDelphi: Reinforced Clarification Questions with Defeasibility Rewards for Social and Moral Situations
Valentina Pyatkin, Jena D. Hwang, Vivek Srikumar, Ximing Lu, Liwei Jiang, Yejin Choi, Chandra Bhagavatula
21 citations · 20 Dec 2022

I Cast Detect Thoughts: Learning to Converse and Guide with Intents and Theory-of-Mind in Dungeons and Dragons
Pei Zhou, Andrew Zhu, Jennifer Hu, Jay Pujara, Xiang Ren, Chris Callison-Burch, Yejin Choi, Prithviraj Ammanabrolu
25 citations · 20 Dec 2022

Continual Learning for Instruction Following from Realtime Feedback
Alane Suhr, Yoav Artzi
17 citations · 19 Dec 2022

On Second Thought, Let's Not Think Step by Step! Bias and Toxicity in Zero-Shot Reasoning
Omar Shaikh, Hongxin Zhang, William B. Held, Michael S. Bernstein, Diyi Yang
ReLM, LRM · 181 citations · 15 Dec 2022

KRLS: Improving End-to-End Response Generation in Task Oriented Dialog with Reinforced Keywords Learning
Xiao Yu, Qingyang Wu, Kun Qian, Zhou Yu
OffRL · 10 citations · 30 Nov 2022

Reward Gaming in Conditional Text Generation
Richard Yuanzhe Pang, Vishakh Padmakumar, Thibault Sellam, Ankur P. Parikh, He He
24 citations · 16 Nov 2022

Language Generation Models Can Cause Harm: So What Can We Do About It? An Actionable Survey
Sachin Kumar, Vidhisha Balachandran, Lucille Njoo, Antonios Anastasopoulos, Yulia Tsvetkov
ELM · 59 citations · 14 Oct 2022

Offline RL for Natural Language Generation with Implicit Language Q Learning
Charles Burton Snell, Ilya Kostrikov, Yi Su, Mengjiao Yang, Sergey Levine
OffRL · 101 citations · 05 Jun 2022

Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
OSLM, ALM · 11,730 citations · 04 Mar 2022

Relating Neural Text Degeneration to Exposure Bias
Ting-Rui Chiang, Yun-Nung Chen
17 citations · 17 Sep 2021

Fine-Tuning Language Models from Human Preferences
Daniel M. Ziegler, Nisan Stiennon, Jeff Wu, Tom B. Brown, Alec Radford, Dario Amodei, Paul Christiano, G. Irving
ALM · 1,561 citations · 18 Sep 2019

Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation
Yonghui Wu, M. Schuster, Z. Chen, Quoc V. Le, Mohammad Norouzi, ..., Alex Rudnick, Oriol Vinyals, G. Corrado, Macduff Hughes, J. Dean
AIMat · 6,435 citations · 26 Sep 2016