Multi-objective Reinforcement learning from AI Feedback
Marcus Williams
arXiv:2406.07295, 11 June 2024

Papers citing "Multi-objective Reinforcement learning from AI Feedback" (3 papers)

Optimizing Safe and Aligned Language Generation: A Multi-Objective GRPO Approach
Xuying Li, Zhuo Li, Yuji Kosuga, Victor Bian
26 Mar 2025

Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
04 Mar 2022

Fine-Tuning Language Models from Human Preferences
Daniel M. Ziegler, Nisan Stiennon, Jeff Wu, Tom B. Brown, Alec Radford, Dario Amodei, Paul Christiano, G. Irving
18 Sep 2019