| Title | Authors |
|---|---|
| Toward Optimal LLM Alignments Using Two-Player Games | Rui Zheng, Hongyi Guo, Zhihan Liu, Xiaoying Zhang, Yuanshun Yao, ..., Tao Gui, Qi Zhang, Xuanjing Huang, Hang Li, Yang Liu |
| Unpacking DPO and PPO: Disentangling Best Practices for Learning from Preference Feedback | Hamish Ivison, Yizhong Wang, Jiacheng Liu, Zeqiu Wu, Valentina Pyatkin, Nathan Lambert, Noah A. Smith, Yejin Choi, Hannaneh Hajishirzi |
| Transfer Q Star: Principled Decoding for LLM Alignment | Souradip Chakraborty, Soumya Suvra Ghosal, Ming Yin, Dinesh Manocha, Mengdi Wang, Amrit Singh Bedi, Furong Huang |
| Offline Regularised Reinforcement Learning for Large Language Models Alignment | Pierre Harvey Richemond, Yunhao Tang, Daniel Guo, Daniele Calandriello, M. G. Azar, ..., Gil Shamir, Rishabh Joshi, Tianqi Liu, Rémi Munos, Bilal Piot |
| Online Merging Optimizers for Boosting Rewards and Mitigating Tax in Alignment | Keming Lu, Bowen Yu, Fei Huang, Yang Fan, Runji Lin, Chang Zhou |