ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.
Long Is More for Alignment: A Simple but Tough-to-Beat Baseline for Instruction Fine-Tuning

7 February 2024
Hao Zhao, Maksym Andriushchenko, Francesco Croce, Nicolas Flammarion
    ALM

Papers citing "Long Is More for Alignment: A Simple but Tough-to-Beat Baseline for Instruction Fine-Tuning"

10 / 10 papers shown
Mitigating Forgetting in LLM Supervised Fine-Tuning and Preference Learning
H. Fernando, Han Shen, Parikshit Ram, Yi Zhou, Horst Samulowitz, Nathalie Baracaldo, Tianyi Chen
CLL
44 · 2 · 0
20 Oct 2024
SFTMix: Elevating Language Model Instruction Tuning with Mixup Recipe
Yuxin Xiao, Shujian Zhang, Wenxuan Zhou, Marzyeh Ghassemi, Sanqiang Zhao
27 · 0 · 0
07 Oct 2024
From Distributional to Overton Pluralism: Investigating Large Language Model Alignment
Thom Lake, Eunsol Choi, Greg Durrett
31 · 9 · 0
25 Jun 2024
Unlocking Large Language Model's Planning Capabilities with Maximum Diversity Fine-tuning
Wenjun Li, Changyu Chen, Pradeep Varakantham
29 · 2 · 0
15 Jun 2024
Is In-Context Learning Sufficient for Instruction Following in LLMs?
Hao Zhao, Maksym Andriushchenko, Francesco Croce, Nicolas Flammarion
42 · 12 · 0
30 May 2024
Self-Rewarding Language Models
Weizhe Yuan, Richard Yuanzhe Pang, Kyunghyun Cho, Xian Li, Sainbayar Sukhbaatar, Jing Xu, Jason Weston
ReLM, SyDa, ALM, LRM
212 · 291 · 0
18 Jan 2024
Reflection-Tuning: Data Recycling Improves LLM Instruction-Tuning
Ming Li, Lichang Chen, Jiuhai Chen, Shwai He, Heng-Chiao Huang, Jiuxiang Gu, Tianyi Zhou
65 · 10 · 0
18 Oct 2023
Instruction Tuning with GPT-4
Baolin Peng, Chunyuan Li, Pengcheng He, Michel Galley, Jianfeng Gao
SyDa, ALM, LM&MA
154 · 576 · 0
06 Apr 2023
Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
OSLM, ALM
301 · 11,730 · 0
04 Mar 2022
Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
Jason W. Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, F. Xia, Ed H. Chi, Quoc Le, Denny Zhou
LM&Ro, LRM, AI4CE, ReLM
313 · 8,261 · 0
28 Jan 2022