ICDPO: Effectively Borrowing Alignment Capability of Others via In-context Direct Preference Optimization

14 February 2024
Feifan Song, Yuxuan Fan, Xin Zhang, Peiyi Wang, Houfeng Wang
ArXiv (abs) · PDF · HTML · HuggingFace (6 upvotes)

Papers citing "ICDPO: Effectively Borrowing Alignment Capability of Others via In-context Direct Preference Optimization"

7 / 7 papers shown
IROTE: Human-like Traits Elicitation of Large Language Model via In-Context Self-Reflective Optimization
Yuzhuo Bai, Shitong Duan, Muhua Huang, Jing Yao, Zhenghao Liu, Peng Zhang, Tun Lu, Xiaoyuan Yi, Maosong Sun, Xing Xie
12 Aug 2025

Self-Adaptive Cognitive Debiasing for Large Language Models in Decision-Making
Yougang Lyu, Shijie Ren, Yue Feng, Zihan Wang, Zhongfu Chen, Zhaochun Ren, Maarten de Rijke
05 Apr 2025

Test-Time Preference Optimization: On-the-Fly Alignment via Iterative Textual Feedback
Yafu Li, Xuyang Hu, Xiaoye Qu, Linjie Li, Yu Cheng
22 Jan 2025

Targeted Vaccine: Safety Alignment for Large Language Models against Harmful Fine-Tuning via Layer-wise Perturbation (IEEE Transactions on Information Forensics and Security (IEEE TIFS), 2024)
Guozhi Liu, Weiwei Lin, Tiansheng Huang, Ruichao Mo, Qi Mu, Li Shen
13 Oct 2024

Towards a Unified View of Preference Learning for Large Language Models: A Survey
Bofei Gao, Feifan Song, Yibo Miao, Zefan Cai, Zhiyong Yang, ..., Houfeng Wang, Zhifang Sui, Peiyi Wang, Baobao Chang
04 Sep 2024

Direct Alignment of Language Models via Quality-Aware Self-Refinement
Runsheng Yu, Yong Wang, Xiaoqi Jiao, Youzhi Zhang, James T. Kwok
31 May 2024

KnowTuning: Knowledge-aware Fine-tuning for Large Language Models
Yougang Lyu, Lingyong Yan, Shuaiqiang Wang, Haibo Shi, D. Yin, Sudipta Singha Roy, Zhumin Chen, Maarten de Rijke, Zhaochun Ren
17 Feb 2024