HAF-RM: A Hybrid Alignment Framework for Reward Model Training
arXiv: 2407.04185
4 July 2024
Shujun Liu, Xiaoyu Shen, Yuhang Lai, Siyuan Wang, Shengbin Yue, Zengfeng Huang, Xuanjing Huang, Zhongyu Wei
Papers citing "HAF-RM: A Hybrid Alignment Framework for Reward Model Training"
RMB: Comprehensively Benchmarking Reward Models in LLM Alignment
Enyu Zhou, Guodong Zheng, B. Wang, Zhiheng Xi, Shihan Dou, ..., Yurong Mou, Rui Zheng, Tao Gui, Qi Zhang, Xuanjing Huang
13 Oct 2024