arXiv: 2412.12865
Preference-Oriented Supervised Fine-Tuning: Favoring Target Model Over Aligned Large Language Models
17 December 2024
Yuchen Fan
Yuzhong Hong
Qiushi Wang
Junwei Bao
Hongfei Jiang
Yang Song
Papers citing "Preference-Oriented Supervised Fine-Tuning: Favoring Target Model Over Aligned Large Language Models"
Unity RL Playground: A Versatile Reinforcement Learning Framework for Mobile Robots
Linqi Ye
Rankun Li
Xiaowen Hu
Jiayi Li
Boyang Xing
Yan Peng
Bin Liang
07 Mar 2025