CycleAlign: Iterative Distillation from Black-box LLM to White-box Models for Better Human Alignment
arXiv 2310.16271 · 25 October 2023
Jixiang Hong, Quan Tu, C. Chen, Xing Gao, Ji Zhang, Rui Yan
ALM
Papers citing "CycleAlign: Iterative Distillation from Black-box LLM to White-box Models for Better Human Alignment" (5 / 5 papers shown)
DiverseDialogue: A Methodology for Designing Chatbots with Human-Like Diversity
Xiaoyu Lin, Xinkai Yu, Ankit Aich, Salvatore Giorgi, Lyle Ungar
ALM · 40 · 0 · 0 · 30 Aug 2024

High-Dimension Human Value Representation in Large Language Models
Samuel Cahyawijaya, Delong Chen, Yejin Bang, Leila Khalatbari, Bryan Wilie, Ziwei Ji, Etsuko Ishii, Pascale Fung
63 · 5 · 0 · 11 Apr 2024

Aligning Large Language Models through Synthetic Feedback
Sungdong Kim, Sanghwan Bae, Jamin Shin, Soyoung Kang, Donghyun Kwak, Kang Min Yoo, Minjoon Seo
ALM, SyDa · 73 · 67 · 0 · 23 May 2023

Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
OSLM, ALM · 303 · 11,909 · 0 · 04 Mar 2022

ZeRO-Offload: Democratizing Billion-Scale Model Training
Jie Ren, Samyam Rajbhandari, Reza Yazdani Aminabadi, Olatunji Ruwase, Shuangyang Yang, Minjia Zhang, Dong Li, Yuxiong He
MoE · 160 · 413 · 0 · 18 Jan 2021