ResearchTrend.AI
Improving Generalization of Alignment with Human Preferences through Group Invariant Learning
18 October 2023
Rui Zheng, Wei Shen, Yuan Hua, Wenbin Lai, Shihan Dou, Yuhao Zhou, Zhiheng Xi, Xiao Wang, Haoran Huang, Tao Gui, Qi Zhang, Xuanjing Huang

Papers citing "Improving Generalization of Alignment with Human Preferences through Group Invariant Learning"

9 papers shown.

  • A Survey on Progress in LLM Alignment from the Perspective of Reward Design (05 May 2025). Miaomiao Ji, Yanqiu Wu, Zhibin Wu, Shoujin Wang, Jian Yang, Mark Dras, Usman Naseem.
  • RMB: Comprehensively Benchmarking Reward Models in LLM Alignment (13 Oct 2024). Enyu Zhou, Guodong Zheng, B. Wang, Zhiheng Xi, Shihan Dou, ..., Yurong Mou, Rui Zheng, Tao Gui, Qi Zhang, Xuanjing Huang. [ALM]
  • SPA-VL: A Comprehensive Safety Preference Alignment Dataset for Vision Language Model (17 Jun 2024). Yongting Zhang, Lu Chen, Guodong Zheng, Yifeng Gao, Rui Zheng, ..., Yu Qiao, Xuanjing Huang, Feng Zhao, Tao Gui, Jing Shao. [VLM]
  • Group Distributionally Robust Reinforcement Learning with Hierarchical Latent Variables (21 Oct 2022). Mengdi Xu, Peide Huang, Yaru Niu, Visak C. V. Kumar, Jielin Qiu, ..., Kuan-Hui Lee, Xuewei Qi, H. Lam, Bo-wen Li, Ding Zhao. [OOD]
  • Improving alignment of dialogue agents via targeted human judgements (28 Sep 2022). Amelia Glaese, Nat McAleese, Maja Trębacz, John Aslanides, Vlad Firoiu, ..., John F. J. Mellor, Demis Hassabis, Koray Kavukcuoglu, Lisa Anne Hendricks, G. Irving. [ALM, AAML]
  • Defining and Characterizing Reward Hacking (27 Sep 2022). Joar Skalse, Nikolaus H. R. Howe, Dmitrii Krasheninnikov, David M. Krueger.
  • Training language models to follow instructions with human feedback (04 Mar 2022). Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe. [OSLM, ALM]
  • Out-of-Distribution Generalization via Risk Extrapolation (REx) (02 Mar 2020). David M. Krueger, Ethan Caballero, J. Jacobsen, Amy Zhang, Jonathan Binas, Dinghuai Zhang, Rémi Le Priol, Aaron Courville. [OOD]
  • Fine-Tuning Language Models from Human Preferences (18 Sep 2019). Daniel M. Ziegler, Nisan Stiennon, Jeff Wu, Tom B. Brown, Alec Radford, Dario Amodei, Paul Christiano, G. Irving. [ALM]