arXiv: 2305.14387
AlpacaFarm: A Simulation Framework for Methods that Learn from Human Feedback
22 May 2023
Yann Dubois, Xuechen Li, Rohan Taori, Tianyi Zhang, Ishaan Gulrajani, Jimmy Ba, Carlos Guestrin, Percy Liang, Tatsunori B. Hashimoto
[ALM]
Papers citing "AlpacaFarm: A Simulation Framework for Methods that Learn from Human Feedback" (50 / 451 papers shown)
Dialectical Alignment: Resolving the Tension of 3H and Security Threats of LLMs
Shu Yang, Jiayuan Su, Han Jiang, Mengdi Li, Keyuan Cheng, Muhammad Asif Ali, Lijie Hu, Di Wang
16 · 5 · 0 · 30 Mar 2024

Using LLMs to Model the Beliefs and Preferences of Targeted Populations
Keiichi Namikoshi, Alexandre L. S. Filipowicz, David A. Shamma, Rumen Iliev, Candice L Hogan, Nikos Aréchiga
58 · 5 · 0 · 29 Mar 2024

Fine-Tuning Language Models with Reward Learning on Policy [ALM]
Hao Lang, Fei Huang, Yongbin Li
27 · 4 · 0 · 28 Mar 2024

Disentangling Length from Quality in Direct Preference Optimization [ALM]
Ryan Park, Rafael Rafailov, Stefano Ermon, Chelsea Finn
34 · 102 · 0 · 28 Mar 2024

The N+ Implementation Details of RLHF with PPO: A Case Study on TL;DR Summarization [VLM]
Shengyi Huang, Michael Noukhovitch, Arian Hosseini, Kashif Rasul, Weixun Wang, Lewis Tunstall
19 · 31 · 0 · 24 Mar 2024

RewardBench: Evaluating Reward Models for Language Modeling [ALM]
Nathan Lambert, Valentina Pyatkin, Jacob Morrison, Lester James Validad Miranda, Bill Yuchen Lin, ..., Sachin Kumar, Tom Zick, Yejin Choi, Noah A. Smith, Hanna Hajishirzi
74 · 210 · 0 · 20 Mar 2024
Toward Sustainable GenAI using Generation Directives for Carbon-Friendly Large Language Model Inference
Baolin Li, Yankai Jiang, V. Gadepally, Devesh Tiwari
27 · 14 · 0 · 19 Mar 2024

RankPrompt: Step-by-Step Comparisons Make Language Models Better Reasoners [ReLM, ELM, LRM, ALM]
Chi Hu, Yuan Ge, Xiangnan Ma, Hang Cao, Qiang Li, Yonghua Yang, Tong Xiao, Jingbo Zhu
37 · 9 · 0 · 19 Mar 2024

Scaling Data Diversity for Fine-Tuning Language Models in Human Alignment [ALM]
Feifan Song, Bowen Yu, Hao Lang, Haiyang Yu, Fei Huang, Houfeng Wang, Yongbin Li
33 · 11 · 0 · 17 Mar 2024

Recurrent Drafter for Fast Speculative Decoding in Large Language Models
Aonan Zhang, Chong-Jun Wang, Yi Wang, Xuanyu Zhang, Yunfei Cheng
26 · 15 · 0 · 14 Mar 2024

Eyes Closed, Safety On: Protecting Multimodal LLMs via Image-to-Text Transformation [MLLM]
Yunhao Gou, Kai Chen, Zhili Liu, Lanqing Hong, Hang Xu, Zhenguo Li, Dit-Yan Yeung, James T. Kwok, Yu Zhang
35 · 37 · 0 · 14 Mar 2024

Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision
Zhiqing Sun, Longhui Yu, Yikang Shen, Weiyang Liu, Yiming Yang, Sean Welleck, Chuang Gan
23 · 50 · 0 · 14 Mar 2024
HRLAIF: Improvements in Helpfulness and Harmlessness in Open-domain Reinforcement Learning From AI Feedback [VLM, ALM]
Ang Li, Qiugen Xiao, Peng Cao, Jian Tang, Yi Yuan, ..., Weidong Guo, Yukang Gan, Jeffrey Xu Yu, D. Wang, Ying Shan
33 · 10 · 0 · 13 Mar 2024

ALaRM: Align Language Models via Hierarchical Rewards Modeling
Yuhang Lai, Siyuan Wang, Shujun Liu, Xuanjing Huang, Zhongyu Wei
16 · 4 · 0 · 11 Mar 2024

Teaching Large Language Models to Reason with Reinforcement Learning [ReLM, LRM]
Alex Havrilla, Yuqing Du, Sharath Chandra Raparthy, Christoforos Nalmpantis, Jane Dwivedi-Yu, Maksym Zhuravinskyi, Eric Hambro, Sainbayar Sukhbaatar, Roberta Raileanu
29 · 67 · 0 · 07 Mar 2024

KIWI: A Dataset of Knowledge-Intensive Writing Instructions for Answering Research Questions
Fangyuan Xu, Kyle Lo, Luca Soldaini, Bailey Kuehl, Eunsol Choi, David Wadden
29 · 6 · 0 · 06 Mar 2024

Reliable, Adaptable, and Attributable Language Models with Retrieval [KELM, RALM]
Akari Asai, Zexuan Zhong, Danqi Chen, Pang Wei Koh, Luke Zettlemoyer, Hanna Hajishirzi, Wen-tau Yih
36 · 53 · 0 · 05 Mar 2024

Design2Code: Benchmarking Multimodal Code Generation for Automated Front-End Engineering
Chenglei Si, Yanzhe Zhang, Zhengyuan Yang, Ruibo Liu, Diyi Yang
14 · 1 · 0 · 05 Mar 2024
DMoERM: Recipes of Mixture-of-Experts for Effective Reward Modeling [MoE, OffRL]
Shanghaoran Quan
41 · 7 · 0 · 02 Mar 2024

FOFO: A Benchmark to Evaluate LLMs' Format-Following Capability [ALM]
Congying Xia, Chen Xing, Jiangshu Du, Xinyi Yang, Yihao Feng, Ran Xu, Wenpeng Yin, Caiming Xiong
19 · 39 · 0 · 28 Feb 2024

Clustering and Ranking: Diversity-preserved Instruction Selection through Expert-aligned Quality Estimation [ALM]
Yuan Ge, Yilun Liu, Chi Hu, Weibin Meng, Shimin Tao, Xiaofeng Zhao, Hongxia Ma, Li Zhang, Hao Yang, Tong Xiao
27 · 24 · 0 · 28 Feb 2024

Prediction-Powered Ranking of Large Language Models [ALM]
Ivi Chatzi, Eleni Straitouri, Suhas Thejaswi, Manuel Gomez Rodriguez
24 · 5 · 0 · 27 Feb 2024

Benchmarking Data Science Agents [ELM]
Yuge Zhang, Qiyang Jiang, Xingyu Han, Nan Chen, Yuqing Yang, Kan Ren
20 · 9 · 0 · 27 Feb 2024

SelectIT: Selective Instruction Tuning for LLMs via Uncertainty-Aware Self-Reflection
Liangxin Liu, Xuebo Liu, Derek F. Wong, Dongfang Li, Ziyi Wang, Baotian Hu, Min Zhang
45 · 16 · 0 · 26 Feb 2024
KIEval: A Knowledge-grounded Interactive Evaluation Framework for Large Language Models
Zhuohao Yu, Chang Gao, Wenjin Yao, Yidong Wang, Wei Ye, Jindong Wang, Xing Xie, Yue Zhang, Shikun Zhang
32 · 20 · 0 · 23 Feb 2024

Zero-shot cross-lingual transfer in instruction tuning of large language models [LRM]
Nadezhda Chirkova, Vassilina Nikoulina
38 · 3 · 0 · 22 Feb 2024

MT-Bench-101: A Fine-Grained Benchmark for Evaluating Large Language Models in Multi-Turn Dialogues [ELM, LM&MA]
Ge Bai, Jie Liu, Xingyuan Bu, Yancheng He, Jiaheng Liu, ..., Zhuoran Lin, Wenbo Su, Tiezheng Ge, Bo Zheng, Wanli Ouyang
30 · 68 · 0 · 22 Feb 2024

INSTRUCTIR: A Benchmark for Instruction Following of Information Retrieval Models
Hanseok Oh, Hyunji Lee, Seonghyeon Ye, Haebin Shin, Hansol Jang, Changwook Jun, Minjoon Seo
33 · 19 · 0 · 22 Feb 2024

Privacy-Preserving Instructions for Aligning Large Language Models
Da Yu, Peter Kairouz, Sewoong Oh, Zheng Xu
32 · 17 · 0 · 21 Feb 2024

Dynamic Evaluation of Large Language Models by Meta Probing Agents
Kaijie Zhu, Jindong Wang, Qinlin Zhao, Ruochen Xu, Xing Xie
40 · 30 · 0 · 21 Feb 2024
OMGEval: An Open Multilingual Generative Evaluation Benchmark for Large Language Models [ELM, LRM]
Yang Janet Liu, Meng Xu, Shuo Wang, Liner Yang, Haoyu Wang, ..., Cunliang Kong, Yun-Nung Chen, Yang Liu, Maosong Sun, Erhong Yang
36 · 1 · 0 · 21 Feb 2024

RefuteBench: Evaluating Refuting Instruction-Following for Large Language Models [ALM, LRM]
Jianhao Yan, Yun Luo, Yue Zhang
28 · 6 · 0 · 21 Feb 2024

Large Language Models for Data Annotation: A Survey [SyDa]
Zhen Tan, Dawei Li, Song Wang, Alimohammad Beigi, Bohan Jiang, Amrita Bhattacharjee, Mansooreh Karami, Jundong Li, Lu Cheng, Huan Liu
42 · 44 · 0 · 21 Feb 2024

Healthcare Copilot: Eliciting the Power of General LLMs for Medical Consultation [LM&MA]
Zhiyao Ren, Yibing Zhan, Baosheng Yu, Liang Ding, Dacheng Tao
32 · 12 · 0 · 20 Feb 2024
Bayesian Reward Models for LLM Alignment
Adam X. Yang, Maxime Robeyns, Thomas Coste, Zhengyan Shi, Jun Wang, Haitham Bou-Ammar, Laurence Aitchison
32 · 17 · 0 · 20 Feb 2024

IMBUE: Improving Interpersonal Effectiveness through Simulation and Just-in-time Feedback with Human-Language Model Interaction
Inna Wanyin Lin, Ashish Sharma, Christopher Rytting, Adam S. Miner, Jina Suh, Tim Althoff
19 · 10 · 0 · 19 Feb 2024

Revisiting Knowledge Distillation for Autoregressive Language Models [KELM]
Qihuang Zhong, Liang Ding, Li Shen, Juhua Liu, Bo Du, Dacheng Tao
39 · 15 · 0 · 19 Feb 2024

ROSE Doesn't Do That: Boosting the Safety of Instruction-Tuned Large Language Models with Reverse Prompt Contrastive Decoding [LM&MA]
Qihuang Zhong, Liang Ding, Juhua Liu, Bo Du, Dacheng Tao
37 · 22 · 0 · 19 Feb 2024

Multi-Task Inference: Can Large Language Models Follow Multiple Instructions at Once? [ELM, LRM]
Guijin Son, Sangwon Baek, Sangdae Nam, Ilgyun Jeong, Seungone Kim
19 · 13 · 0 · 18 Feb 2024
Dissecting Human and LLM Preferences [ALM]
Junlong Li, Fan Zhou, Shichao Sun, Yikai Zhang, Hai Zhao, Pengfei Liu
8 · 5 · 0 · 17 Feb 2024

KnowTuning: Knowledge-aware Fine-tuning for Large Language Models
Yougang Lyu, Lingyong Yan, Shuaiqiang Wang, Haibo Shi, Dawei Yin, Pengjie Ren, Zhumin Chen, Maarten de Rijke, Zhaochun Ren
16 · 5 · 0 · 17 Feb 2024

Can LLMs Speak For Diverse People? Tuning LLMs via Debate to Generate Controllable Controversial Statements
Ming Li, Jiuhai Chen, Lichang Chen, Tianyi Zhou
66 · 17 · 0 · 16 Feb 2024

DataDreamer: A Tool for Synthetic Data Generation and Reproducible LLM Workflows [SyDa, AI4CE]
Ajay Patel, Colin Raffel, Chris Callison-Burch
17 · 25 · 0 · 16 Feb 2024

Recovering the Pre-Fine-Tuning Weights of Generative Models
Eliahu Horwitz, Jonathan Kahana, Yedid Hoshen
45 · 9 · 0 · 15 Feb 2024

Selective Reflection-Tuning: Student-Selected Data Recycling for LLM Instruction-Tuning
Ming Li, Lichang Chen, Jiuhai Chen, Shwai He, Jiuxiang Gu, Tianyi Zhou
13 · 50 · 0 · 15 Feb 2024
InfoRM: Mitigating Reward Hacking in RLHF via Information-Theoretic Reward Modeling
Yuchun Miao, Sen Zhang, Liang Ding, Rong Bao, Lefei Zhang, Dacheng Tao
22 · 12 · 0 · 14 Feb 2024

MaxMin-RLHF: Towards Equitable Alignment of Large Language Models with Diverse Human Preferences [ALM]
Souradip Chakraborty, Jiahao Qiu, Hui Yuan, Alec Koppel, Furong Huang, Dinesh Manocha, Amrit Singh Bedi, Mengdi Wang
17 · 46 · 0 · 14 Feb 2024

Aya Model: An Instruction Finetuned Open-Access Multilingual Language Model [ALM, ELM, SyDa, LRM]
A. Ustun, Viraat Aryabumi, Zheng-Xin Yong, Wei-Yin Ko, Daniel D'souza, ..., Shayne Longpre, Niklas Muennighoff, Marzieh Fadaee, Julia Kreutzer, Sara Hooker
27 · 192 · 0 · 12 Feb 2024

ODIN: Disentangled Reward Mitigates Hacking in RLHF [AAML]
Lichang Chen, Chen Zhu, Davit Soselia, Jiuhai Chen, Tianyi Zhou, Tom Goldstein, Heng-Chiao Huang, M. Shoeybi, Bryan Catanzaro
42 · 51 · 0 · 11 Feb 2024

Online Iterative Reinforcement Learning from Human Feedback with General Preference Model [OffRL]
Chen Ye, Wei Xiong, Yuheng Zhang, Nan Jiang, Tong Zhang
36 · 9 · 0 · 11 Feb 2024