Direct Preference Optimization: Your Language Model is Secretly a Reward Model

Rafael Rafailov, Archit Sharma, E. Mitchell, Stefano Ermon, Christopher D. Manning, Chelsea Finn
29 May 2023 (ALM)

Papers citing "Direct Preference Optimization: Your Language Model is Secretly a Reward Model"

50 / 641 papers shown
Simplify RLHF as Reward-Weighted SFT: A Variational Method
Yuhao Du, Z. Li, Pengyu Cheng, Zhihong Chen, Yuejiao Xie, Xiang Wan, Anningzhe Gao
20 Feb 2025

Diffusion Models in Recommendation Systems: A Survey
Ting-Ruen Wei, Yi Fang
20 Feb 2025

Drift: Decoding-time Personalized Alignments with Implicit User Preferences
Minbeom Kim, Kang-il Lee, Seongho Joo, Hwaran Lee, Thibaut Thonet, Kyomin Jung
20 Feb 2025 (AI4TS)

Faster WIND: Accelerating Iterative Best-of-$N$ Distillation for LLM Alignment
Tong Yang, Jincheng Mei, H. Dai, Zixin Wen, Shicong Cen, Dale Schuurmans, Yuejie Chi, Bo Dai
20 Feb 2025

Symmetrical Visual Contrastive Optimization: Aligning Vision-Language Models with Minimal Contrastive Images
Shengguang Wu, Fan-Yun Sun, Kaiyue Wen, Nick Haber
20 Feb 2025 (VLM)

Can a Single Model Master Both Multi-turn Conversations and Tool Use? CoALM: A Unified Conversational Agentic Language Model
Emre Can Acikgoz, Jeremiah Greer, Akul Datta, Ze Yang, William Zeng, Oussama Elachqar, Emmanouil Koukoumidis, Dilek Hakkani-Tür, Gökhan Tür
20 Feb 2025 (LLMAG)

UPCORE: Utility-Preserving Coreset Selection for Balanced Unlearning
Vaidehi Patil, Elias Stengel-Eskin, Mohit Bansal
20 Feb 2025 (MU, CLL)

Oreo: A Plug-in Context Reconstructor to Enhance Retrieval-Augmented Generation
Sha Li, Naren Ramakrishnan
18 Feb 2025 (RALM, KELM)

Savaal: Scalable Concept-Driven Question Generation to Enhance Human Learning
Kimia Noorbakhsh, Joseph Chandler, Pantea Karimi, M. Alizadeh, H. Balakrishnan
18 Feb 2025 (LRM)

Multi-Step Alignment as Markov Games: An Optimistic Online Gradient Descent Approach with Convergence Guarantees
Yongtao Wu, Luca Viano, Yihang Chen, Zhenyu Zhu, Kimon Antonakopoulos, Quanquan Gu, V. Cevher
18 Feb 2025

Policy-to-Language: Train LLMs to Explain Decisions with Flow-Matching Generated Rewards
Xinyi Yang, Liang Zeng, Heng Dong, C. Yu, X. Wu, H. Yang, Yu Wang, Milind Tambe, Tonghan Wang
18 Feb 2025

Which Retain Set Matters for LLM Unlearning? A Case Study on Entity Unlearning
Hwan Chang, Hwanhee Lee
17 Feb 2025 (MU)

Why Vision Language Models Struggle with Visual Arithmetic? Towards Enhanced Chart and Geometry Understanding
Kung-Hsiang Huang, Can Qin, Haoyi Qiu, Philippe Laban, Shafiq R. Joty, Caiming Xiong, C. Wu
17 Feb 2025 (VLM)

Preference Optimization for Reasoning with Pseudo Feedback
Fangkai Jiao, Geyang Guo, Xingxing Zhang, Nancy F. Chen, Shafiq R. Joty, Furu Wei
17 Feb 2025 (LRM)

LeDex: Training LLMs to Better Self-Debug and Explain Code
Nan Jiang, Xiaopeng Li, Shiqi Wang, Qiang Zhou, Soneya Binta Hossain, Baishakhi Ray, Varun Kumar, Xiaofei Ma, Anoop Deoras
17 Feb 2025 (LRM)

A Critical Look At Tokenwise Reward-Guided Text Generation
Ahmad Rashid, Ruotian Wu, Julia Grosse, Agustinus Kristiadi, Pascal Poupart
17 Feb 2025 (OffRL)

video-SALMONN-o1: Reasoning-enhanced Audio-visual Large Language Model
Guangzhi Sun, Yudong Yang, Jimin Zhuang, Changli Tang, Y. Li, W. Li, Z. Ma, Chao Zhang
17 Feb 2025 (LRM, MLLM, VLM)

Diversity-Oriented Data Augmentation with Large Language Models
Zaitian Wang, Jinghan Zhang, Xinhao Zhang, Kunpeng Liu, Pengfei Wang, Yuanchun Zhou
17 Feb 2025

Following the Autoregressive Nature of LLM Embeddings via Compression and Alignment
Jingcheng Deng, Zhongtao Jiang, Liang Pang, Liwei Chen, Kun Xu, Zihao Wei, Huawei Shen, Xueqi Cheng
17 Feb 2025

ReLearn: Unlearning via Learning for Large Language Models
Haoming Xu, Ningyuan Zhao, Liming Yang, Sendong Zhao, Shumin Deng, Mengru Wang, Bryan Hooi, Nay Oo, H. Chen, N. Zhang
16 Feb 2025 (KELM, CLL, MU)

Leveraging Constrained Monte Carlo Tree Search to Generate Reliable Long Chain-of-Thought for Mathematical Reasoning
Qingwen Lin, Boyan Xu, Zijian Li, Z. Hao, Keli Zhang, Ruichu Cai
16 Feb 2025 (LRM)

BoT: Breaking Long Thought Processes of o1-like Large Language Models through Backdoor Attack
Zihao Zhu, Hongbao Zhang, Mingda Zhang, Ruotong Wang, Guanzong Wu, Ke Xu, Baoyuan Wu
16 Feb 2025 (AAML, LRM)

LogiDynamics: Unraveling the Dynamics of Logical Inference in Large Language Model Reasoning
Tianshi Zheng, Jiayang Cheng, Chunyang Li, Haochen Shi, Z. Wang, Jiaxin Bai, Y. Song, Ginny Y. Wong, Simon See
16 Feb 2025 (LRM)

Preference learning made easy: Everything should be understood through win rate
Lily H. Zhang, Rajesh Ranganath
14 Feb 2025

Step-Video-T2V Technical Report: The Practice, Challenges, and Future of Video Foundation Model
Guoqing Ma, Haoyang Huang, K. Yan, L. Chen, Nan Duan, ..., Y. Wang, Yuanwei Lu, Yu-Cheng Chen, Yu-Juan Luo, Y. Luo
14 Feb 2025 (DiffM, VGen)

Large Language Diffusion Models
Shen Nie, Fengqi Zhu, Zebin You, Xiaolu Zhang, Jingyang Ou, Jun Hu, Jun Zhou, Yankai Lin, Ji-Rong Wen, Chongxuan Li
14 Feb 2025

KKA: Improving Vision Anomaly Detection through Anomaly-related Knowledge from Large Language Models
Dong Chen, Zhengqing Hu, Peiguang Fan, Yueting Zhuang, Yafei Li, Qidong Liu, Xiaoheng Jiang, Mingliang Xu
14 Feb 2025 (VLM)

Direct Preference Optimization-Enhanced Multi-Guided Diffusion Model for Traffic Scenario Generation
Seungjun Yu, Kisung Kim, Daejung Kim, Haewook Han, Jinhan Lee
14 Feb 2025

Steel-LLM: From Scratch to Open Source -- A Personal Journey in Building a Chinese-Centric LLM
Qingshui Gu, Shu Li, Tianyu Zheng, Zhaoxiang Zhang
10 Feb 2025

ReasonFlux: Hierarchical LLM Reasoning via Scaling Thought Templates
L. Yang, Zhaochen Yu, Bin Cui, Mengdi Wang
10 Feb 2025 (ReLM, LRM, AI4CE)

Learning to Substitute Words with Model-based Score Ranking
Hongye Liu, Ricardo Henao
09 Feb 2025

Dual Caption Preference Optimization for Diffusion Models
Amir Saeidi, Yiran Luo, Agneet Chatterjee, Shamanthak Hegde, Bimsara Pathiraja, Yezhou Yang, Chitta Baral
09 Feb 2025 (DiffM)

Jailbreaking to Jailbreak
Jeremy Kritz, Vaughn Robinson, Robert Vacareanu, Bijan Varjavand, Michael Choi, Bobby Gogov, Scale Red Team, Summer Yue, Willow Primack, Zifan Wang
09 Feb 2025

Design Considerations in Offline Preference-based RL
Alekh Agarwal, Christoph Dann, T. V. Marinov
08 Feb 2025 (OffRL)

ARIES: Stimulating Self-Refinement of Large Language Models by Iterative Preference Optimization
Yongcheng Zeng, Xinyu Cui, Xuanfa Jin, Guoqing Liu, Zexu Sun, ..., Dong Li, Ning Yang, Jianye Hao, H. Zhang, J. Wang
08 Feb 2025 (LRM, LLMAG)

Koel-TTS: Enhancing LLM based Speech Generation with Preference Alignment and Classifier Free Guidance
Shehzeen Samarah Hussain, Paarth Neekhara, Xuesong Yang, Edresson Casanova, Subhankar Ghosh, Mikyas T. Desta, Roy Fejgin, Rafael Valle, Jason Chun Lok Li
07 Feb 2025

Can LLMs Rank the Harmfulness of Smaller LLMs? We are Not There Yet
Berk Atil, Vipul Gupta, Sarkar Snigdha Sarathi Das, R. Passonneau
07 Feb 2025

The Best Instruction-Tuning Data are Those That Fit
Dylan Zhang, Qirun Dai, Hao Peng
06 Feb 2025 (ALM)

CTR-Driven Advertising Image Generation with Multimodal Large Language Models
Xingye Chen, Wei Feng, Zhenbang Du, Weizhen Wang, Y. Chen, ..., Jingping Shao, Yuanjie Shao, Xinge You, Changxin Gao, Nong Sang
05 Feb 2025 (OffRL)

Generative Psycho-Lexical Approach for Constructing Value Systems in Large Language Models
Haoran Ye, T. Zhang, Yuhang Xie, Liyuan Zhang, Yuanyi Ren, Xin Zhang, Guojie Song
04 Feb 2025 (PILM)

The Battling Influencers Game: Nash Equilibria Structure of a Potential Game and Implications to Value Alignment
Young Wu, Yancheng Zhu, Jin-Yi Cai, Xiaojin Zhu
03 Feb 2025

Understanding the Capabilities and Limitations of Weak-to-Strong Generalization
Wei Yao, Wenkai Yang, Z. Wang, Yankai Lin, Yong Liu
03 Feb 2025 (ELM)

GuardReasoner: Towards Reasoning-based LLM Safeguards
Yue Liu, Hongcheng Gao, Shengfang Zhai, Jun-Xiong Xia, Tianyi Wu, Zhiwei Xue, Y. Chen, Kenji Kawaguchi, Jiaheng Zhang, Bryan Hooi
30 Jan 2025 (AI4TS, LRM)

Think Smarter not Harder: Adaptive Reasoning with Inference Aware Optimization
Zishun Yu, Tengyu Xu, Di Jin, Karthik Abinav Sankararaman, Yun He, ..., Eryk Helenowski, Chen Zhu, Sinong Wang, Hao Ma, Han Fang
29 Jan 2025 (LRM)

COS(M+O)S: Curiosity and RL-Enhanced MCTS for Exploring Story Space via Language Models
Tobias Materzok
28 Jan 2025 (LRM)

DeSTA2: Developing Instruction-Following Speech Language Model Without Speech Instruction-Tuning Data
Ke-Han Lu, Zhehuai Chen, Szu-Wei Fu, Chao-Han Huck Yang, Jagadeesh Balam, Boris Ginsburg, Yu-Te Wang, Hung-yi Lee
28 Jan 2025 (AuLLM, SyDa)

BoKDiff: Best-of-K Diffusion Alignment for Target-Specific 3D Molecule Generation
Ali Khodabandeh Yalabadi, Mehdi Yazdani-Jahromi, O. Garibay
28 Jan 2025

Learning to Summarize from LLM-generated Feedback
Hwanjun Song, Taewon Yun, Yuho Lee, Jihwan Oh, Gihun Lee, Jason (Jinglun) Cai, Hang Su
28 Jan 2025

Visual Generation Without Guidance
Huayu Chen, Kai Jiang, Kaiwen Zheng, Jianfei Chen, Hang Su, J. Zhu
28 Jan 2025

TimeHF: Billion-Scale Time Series Models Guided by Human Feedback
Yongzhi Qi, Hao Hu, Dazhou Lei, Jianshen Zhang, Zhengxin Shi, Yulin Huang, Zhengyu Chen, Xiaoming Lin, Zuo-jun Shen
28 Jan 2025 (AI4TS, AI4CE)