How Far Can In-Context Alignment Go? Exploring the State of In-Context Alignment
arXiv: 2406.11474
17 June 2024
Heyan Huang, Yinghao Li, Huashan Sun, Yu Bai, Yang Gao
Papers citing "How Far Can In-Context Alignment Go? Exploring the State of In-Context Alignment" (8 of 8 papers shown):
Research on Superalignment Should Advance Now with Parallel Optimization of Competence and Conformity. HyunJin Kim, Xiaoyuan Yi, Jing Yao, Muhua Huang, Jinyeong Bak, James Evans, Xing Xie. 08 Mar 2025.
Controllable Safety Alignment: Inference-Time Adaptation to Diverse Safety Requirements. Jingyu Zhang, Ahmed Elgohary, Ahmed Magooda, Daniel Khashabi, Benjamin Van Durme. 11 Oct 2024.
AI Safety in Generative AI Large Language Models: A Survey. Jaymari Chua, Yun Yvonna Li, Shiyi Yang, Chen Wang, Lina Yao. 06 Jul 2024.
Does Fine-Tuning LLMs on New Knowledge Encourage Hallucinations? Zorik Gekhman, G. Yona, Roee Aharoni, Matan Eyal, Amir Feder, Roi Reichart, Jonathan Herzig. 09 May 2024.
A Closer Look at the Limitations of Instruction Tuning. Sreyan Ghosh, Chandra Kiran Reddy Evuru, Sonal Kumar, Deepali Aneja, Zeyu Jin, R. Duraiswami, Dinesh Manocha. 03 Feb 2024.
Parameter-Efficient Finetuning for Robust Continual Multilingual Learning. Kartikeya Badola, Shachi Dave, Partha P. Talukdar. 14 Sep 2022.
Training language models to follow instructions with human feedback. Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe. 04 Mar 2022.
ZeRO-Offload: Democratizing Billion-Scale Model Training. Jie Ren, Samyam Rajbhandari, Reza Yazdani Aminabadi, Olatunji Ruwase, Shuangyang Yang, Minjia Zhang, Dong Li, Yuxiong He. 18 Jan 2021.