Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback

12 April 2022
Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova Dassarma, Dawn Drain, Stanislav Fort, Deep Ganguli, T. Henighan, Nicholas Joseph, Saurav Kadavath, John Kernion, Tom Conerly, S. E. Showk, Nelson Elhage, Zac Hatfield-Dodds, Danny Hernandez, Tristan Hume, Scott Johnston, Shauna Kravec, Liane Lovitt, Neel Nanda, Catherine Olsson, Dario Amodei, Tom B. Brown, Jack Clark, Sam McCandlish, C. Olah, Benjamin Mann, Jared Kaplan

Papers citing "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback"

Showing 50 of 1,795 citing papers.

SteerLM: Attribute Conditioned SFT as an (User-Steerable) Alternative to RLHF
Yi Dong, Zhilin Wang, Makesh Narsimhan Sreedhar, Xianchao Wu, Oleksii Kuchaiev
ALM, LLMSV · 09 Oct 2023

Loose lips sink ships: Mitigating Length Bias in Reinforcement Learning from Human Feedback
Wei Shen, Rui Zheng, Wenyu Zhan, Jun Zhao, Shihan Dou, Tao Gui, Qi Zhang, Xuanjing Huang
ALM · 08 Oct 2023

FakeGPT: Fake News Generation, Explanation and Detection of Large Language Models
Yue Huang, Lichao Sun
08 Oct 2023

Balancing Specialized and General Skills in LLMs: The Impact of Modern Tuning and Data Strategy
Zhengwu Zhang, Chen Zheng, Da Tang, Ke Sun, Yukun Ma, Yingtong Bu, Xun Zhou, Liang Zhao
ALM · 07 Oct 2023

Beyond Text: A Deep Dive into Large Language Models' Ability on Understanding Graph Data
Yuntong Hu, Zhengwu Zhang, Liang Zhao
GNN · 07 Oct 2023

Large Language Models for Spatial Trajectory Patterns Mining
Zhengwu Zhang, Hossein Amiri, Zhenke Liu, Andreas Züfle, Liang Zhao
07 Oct 2023

Reward Dropout Improves Control: Bi-objective Perspective on Reinforced LM
Changhun Lee, Chiehyeon Lim
06 Oct 2023

A Long Way to Go: Investigating Length Correlations in RLHF
Prasann Singhal, Tanya Goyal, Jiacheng Xu, Greg Durrett
05 Oct 2023

Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization
Zhanhui Zhou, Jie Liu, Chao Yang, Jing Shao, Yu Liu, Xiangyu Yue, Wanli Ouyang, Yu Qiao
05 Oct 2023

Fine-tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To!
Xiangyu Qi, Yi Zeng, Tinghao Xie, Pin-Yu Chen, Ruoxi Jia, Prateek Mittal, Peter Henderson
SILM · 05 Oct 2023

Evaluating Hallucinations in Chinese Large Language Models
Qinyuan Cheng, Tianxiang Sun, Wenwei Zhang, Siyin Wang, Xiangyang Liu, ..., Junliang He, Mianqiu Huang, Zhangyue Yin, Kai Chen, Xipeng Qiu
HILM, ELM · 05 Oct 2023

$\mathcal{B}$-Coder: Value-Based Deep Reinforcement Learning for Program Synthesis
Zishun Yu, Yunzhe Tao, Liyu Chen, Tao Sun, Hongxia Yang
04 Oct 2023

JsonTuning: Towards Generalizable, Robust, and Controllable Instruction Tuning
Chang Gao, Wenxuan Zhang, Guizhen Chen, Wai Lam
04 Oct 2023

Reward Model Ensembles Help Mitigate Overoptimization
Thomas Coste, Usman Anwar, Robert Kirk, David M. Krueger
NoLa, ALM · 04 Oct 2023

The Empty Signifier Problem: Towards Clearer Paradigms for Operationalising "Alignment" in Large Language Models
Hannah Rose Kirk, Bertie Vidgen, Paul Röttger, Scott A. Hale
03 Oct 2023

Learning Optimal Advantage from Preferences and Mistaking it for Reward
W. B. Knox, Stephane Hatgis-Kessell, Sigurdur O. Adalgeirsson, Serena Booth, Anca D. Dragan, Peter Stone, S. Niekum
03 Oct 2023

Low-Resource Languages Jailbreak GPT-4
Zheng-Xin Yong, Cristina Menghini, Stephen H. Bach
SILM · 03 Oct 2023

Jailbreaker in Jail: Moving Target Defense for Large Language Models
Bocheng Chen, Advait Paliwal, Qiben Yan
AAML · 03 Oct 2023

Reinforcement Learning from Automatic Feedback for High-Quality Unit Test Generation
Benjamin Steenhoek, Michele Tufano, Neel Sundaresan, Alexey Svyatkovskiy
OffRL, ALM · 03 Oct 2023

Automatic Pair Construction for Contrastive Post-training
Canwen Xu, Corby Rosset, Ethan C. Chau, Luciano Del Corro, Shweti Mahajan, Julian McAuley, Jennifer Neville, Ahmed Hassan Awadallah, Nikhil Rao
ALM · 03 Oct 2023

Dimensions of Disagreement: Unpacking Divergence and Misalignment in Cognitive Science and Artificial Intelligence
Kerem Oktar, Ilia Sucholutsky, Tania Lombrozo, Thomas L. Griffiths
AI4CE · 03 Oct 2023

Ask Again, Then Fail: Large Language Models' Vacillations in Judgment
Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia
AAML, HILM · 03 Oct 2023

Conceptual Framework for Autonomous Cognitive Entities
David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli
03 Oct 2023

TWIZ-v2: The Wizard of Multimodal Conversational-Stimulus
Rafael Ferreira, Diogo Tavares, Diogo Glória-Silva, Rodrigo Valerio, João Bordalo, Ines Simoes, Vasco Ramos, David Semedo, João Magalhães
03 Oct 2023

What's the Magic Word? A Control Theory of LLM Prompting
Aman Bhargava, Cameron Witkowski, Manav Shah, Matt W. Thomson
LLMAG · 02 Oct 2023

SmartPlay: A Benchmark for LLMs as Intelligent Agents
Yue Wu, Xuan Tang, Tom Michael Mitchell, Yuanzhi Li
ELM, LLMAG · 02 Oct 2023

Tool-Augmented Reward Modeling
Lei Li, Yekun Chai, Shuohuan Wang, Yu Sun, Hao Tian, Ningyu Zhang, Hua-Hong Wu
OffRL · 02 Oct 2023

The Participatory Turn in AI Design: Theoretical Foundations and the Current State of Practice
Fernando Delgado, Stephen Yang, Michael A. Madaio, Qian Yang
02 Oct 2023

All Languages Matter: On the Multilingual Safety of Large Language Models
Wenxuan Wang, Zhaopeng Tu, Chang Chen, Youliang Yuan, Jen-tse Huang, Wenxiang Jiao, Michael R. Lyu
ALM, LRM · 02 Oct 2023

Enabling Language Models to Implicitly Learn Self-Improvement
Ziqi Wang, Le Hou, Tianjian Lu, Yuexin Wu, Yunxuan Li, Hongkun Yu, Heng Ji
ReLM, LRM · 02 Oct 2023

Parameter-Efficient Tuning Helps Language Model Alignment
Tianci Xue, Ziqi Wang, Heng Ji
ALM · 01 Oct 2023

Beyond Task Performance: Evaluating and Reducing the Flaws of Large Multimodal Models with In-Context Learning
Mustafa Shukor, Alexandre Ramé, Corentin Dancette, Matthieu Cord
LRM, MLLM · 01 Oct 2023

Adapting LLM Agents with Universal Feedback in Communication
Kuan-Chieh Jackson Wang, Yadong Lu, Michael Santacroce, Yeyun Gong, Chao Zhang, Yelong Shen
LLMAG · 01 Oct 2023

SELF: Self-Evolution with Language Feedback
Jianqiao Lu, Wanjun Zhong, Wenyong Huang, Yufei Wang, Qi Zhu, ..., Weichao Wang, Xingshan Zeng, Lifeng Shang, Xin Jiang, Qun Liu
LRM, SyDa · 01 Oct 2023

From Language Modeling to Instruction Following: Understanding the Behavior Shift in LLMs after Instruction Tuning
Xuansheng Wu, Wenlin Yao, Jianshu Chen, Xiaoman Pan, Xiaoyang Wang, Ninghao Liu, Dong Yu
LRM · 30 Sep 2023

Corex: Pushing the Boundaries of Complex Reasoning through Multi-Model Collaboration
Qiushi Sun, Zhangyue Yin, Xiang Li, Zhiyong Wu, Xipeng Qiu, Lingpeng Kong
LRM, LLMAG · 30 Sep 2023

Pairwise Proximal Policy Optimization: Harnessing Relative Feedback for LLM Alignment
Tianhao Wu, Banghua Zhu, Ruoyu Zhang, Zhaojin Wen, Kannan Ramchandran, Jiantao Jiao
30 Sep 2023

Directly Fine-Tuning Diffusion Models on Differentiable Rewards
Amita Gajewar, Paul Vicol, G. Bansal, David J Fleet
29 Sep 2023

LoRA ensembles for large language model fine-tuning
Xi Wang, Laurence Aitchison, Maja Rudolph
UQCV · 29 Sep 2023

Building Privacy-Preserving and Secure Geospatial Artificial Intelligence Foundation Models
Jinmeng Rao, Song Gao, Gengchen Mai, Joanna M. Wardlaw
29 Sep 2023

Qwen Technical Report
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, ..., Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
OSLM · 28 Sep 2023

Language Models as a Service: Overview of a New Paradigm and its Challenges
Emanuele La Malfa, Aleksandar Petrov, Simon Frieder, Christoph Weinhuber, Ryan Burnell, Raza Nazar, Anthony Cohn, Nigel Shadbolt, Michael Wooldridge
ALM, ELM · 28 Sep 2023

Beyond Reverse KL: Generalizing Direct Preference Optimization with Diverse Divergence Constraints
Chaoqi Wang, Yibo Jiang, Yuguang Yang, Han Liu, Yuxin Chen
28 Sep 2023

The Trickle-down Impact of Reward (In-)consistency on RLHF
Lingfeng Shen, Sihao Chen, Linfeng Song, Lifeng Jin, Baolin Peng, Haitao Mi, Daniel Khashabi, Dong Yu
28 Sep 2023

Don't throw away your value model! Generating more preferable text with Value-Guided Monte-Carlo Tree Search decoding
Jiacheng Liu, Andrew Cohen, Ramakanth Pasunuru, Yejin Choi, Hannaneh Hajishirzi, Asli Celikyilmaz
26 Sep 2023

Large Language Model Alignment: A Survey
Tianhao Shen, Renren Jin, Yufei Huang, Chuang Liu, Weilong Dong, Zishan Guo, Xinwei Wu, Yan Liu, Deyi Xiong
LM&MA · 26 Sep 2023

Aligning Large Multimodal Models with Factually Augmented RLHF
Zhiqing Sun, Sheng Shen, Shengcao Cao, Haotian Liu, Chunyuan Li, ..., Liangyan Gui, Yu-xiong Wang, Yiming Yang, Kurt Keutzer, Trevor Darrell
VLM · 25 Sep 2023

Identifying the Risks of LM Agents with an LM-Emulated Sandbox
Yangjun Ruan, Honghua Dong, Andrew Wang, Silviu Pitis, Yongchao Zhou, Jimmy Ba, Yann Dubois, Chris J. Maddison, Tatsunori Hashimoto
LLMAG, ELM · 25 Sep 2023

Can LLM-Generated Misinformation Be Detected?
Canyu Chen, Kai Shu
DeLMO · 25 Sep 2023

Creativity Support in the Age of Large Language Models: An Empirical Study Involving Emerging Writers
Tuhin Chakrabarty, Vishakh Padmakumar, Faeze Brahman, Smaranda Muresan
22 Sep 2023