Generalizing From Short to Long: Effective Data Synthesis for Long-Context Instruction Tuning

24 February 2025
Wenhao Zhu
Pinzhen Chen
Hanxu Hu
Shujian Huang
Fei Yuan
Jiajun Chen
Alexandra Birch
Abstract

Long-context modelling for large language models (LLMs) has been a key area of recent research because many real-world use cases require reasoning over longer inputs such as documents. Research on long-context modelling has focused on how to model position, with little investigation into other important aspects of language modelling such as instruction tuning. Long-context training examples are challenging and expensive to create and use. In this paper, we investigate how to design instruction data for the post-training phase of a long-context pre-trained model: how much and what type of context is needed for optimal and efficient post-training. Our controlled study reveals that models instruction-tuned on short contexts can effectively generalize to longer ones, while also identifying other critical factors such as instruction difficulty and context composition. Based on these findings, we propose context synthesis, a novel data synthesis framework that leverages off-the-shelf LLMs to generate extended background contexts for high-quality instruction-answer pairs. Experimental results on the document-level benchmark LongBench demonstrate that our proposed approach outperforms previous instruction-synthesis approaches and comes close to the performance of human-annotated long-context instruction data. The project will be available at: this https URL.
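To make the context-synthesis idea concrete, below is a minimal Python sketch of the pipeline the abstract describes: starting from an existing high-quality instruction-answer pair, an off-the-shelf LLM is prompted to write an extended background document that supports the answer, and the result is assembled into a long-context training example. The prompt wording, field names, and the generate() placeholder are illustrative assumptions, not the authors' actual implementation.

from dataclasses import dataclass

@dataclass
class Example:
    context: str      # synthesized long background document
    instruction: str  # original short instruction
    answer: str       # original gold answer

# Hypothetical prompt template; the paper's actual prompts may differ.
CONTEXT_PROMPT = (
    "Write a long, self-contained background document such that the "
    "question below can be answered from it.\n\n"
    "Question: {instruction}\n"
    "Answer: {answer}\n\n"
    "Document:"
)

def generate(prompt: str) -> str:
    """Placeholder for a call to any off-the-shelf LLM."""
    raise NotImplementedError("plug in your model or API of choice")

def synthesize_example(instruction: str, answer: str) -> Example:
    """Turn a short instruction-answer pair into a long-context example."""
    context = generate(
        CONTEXT_PROMPT.format(instruction=instruction, answer=answer)
    )
    return Example(context=context, instruction=instruction, answer=answer)

def to_training_text(ex: Example) -> str:
    """Format one example as a single instruction-tuning sample."""
    return f"{ex.context}\n\nQuestion: {ex.instruction}\nAnswer: {ex.answer}"

The key design point is that the instruction and answer come first and the long context is generated around them, so the pair's quality is preserved while the context length is extended.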

@article{zhu2025_2502.15592,
  title={Generalizing From Short to Long: Effective Data Synthesis for Long-Context Instruction Tuning},
  author={Wenhao Zhu and Pinzhen Chen and Hanxu Hu and Shujian Huang and Fei Yuan and Jiajun Chen and Alexandra Birch},
  journal={arXiv preprint arXiv:2502.15592},
  year={2025}
}