© 2025 ResearchTrend.AI, All rights reserved.

arXiv:2311.04046 · Cited By
Reinforcement Learning Fine-tuning of Language Models is Biased Towards More Extractable Features

7 November 2023
Diogo Cruz
Edoardo Pona
Alex Holness-Tofts
Elias Schmied
Víctor Abia Alonso
Charlie Griffin
B. Cirstea

Papers citing "Reinforcement Learning Fine-tuning of Language Models is Biased Towards More Extractable Features"

3 / 3 papers shown
Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
OSLM · ALM · 04 Mar 2022
Measuring and Improving Consistency in Pretrained Language Models
Yanai Elazar, Nora Kassner, Shauli Ravfogel, Abhilasha Ravichander, Eduard H. Hovy, Hinrich Schütze, Yoav Goldberg
HILM · 01 Feb 2021
Fine-Tuning Language Models from Human Preferences
Daniel M. Ziegler, Nisan Stiennon, Jeff Wu, Tom B. Brown, Alec Radford, Dario Amodei, Paul Christiano, G. Irving
ALM · 18 Sep 2019