ResearchTrend.AI

MergeIT: From Selection to Merging for Efficient Instruction Tuning

25 February 2025
Hongyi Cai
Yuqian Fu
Hongming Fu
Bo Zhao
Abstract

Instruction tuning is crucial for optimizing Large Language Models (LLMs), yet mainstream data selection methods heavily rely on LLMs as instruction quality scorers, leading to high computational costs and reduced data diversity. To address these limitations, we propose MergeIT, a novel LLM-based Merging strategy for better Instruction Tuning that shifts the focus from selection to synthesis. MergeIT operates in two stages: first, topic-aware filtering clusters and refines the dataset, preserving diversity while eliminating redundancy without relying on LLM-based scoring. Second, LLM-based merging synthesizes semantically similar instructions into more informative and compact training data, enhancing data richness while further reducing dataset size. Experimental results demonstrate that MergeIT enables efficient, diverse, and scalable instruction selection and synthesis, establishing LLM-based merging as a promising alternative to conventional scoring-based selection methods for instruction tuning. Our source code and datasets are now available at this https URL
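The two-stage pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a cheap token-overlap similarity in place of the paper's topic-aware filtering, and a placeholder `merge_cluster` function standing in for the LLM-based merging step. All function names and thresholds here are hypothetical.

```python
def jaccard(a: str, b: str) -> float:
    """Token-overlap similarity; a cheap stand-in for topic embeddings."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def topic_filter(instructions: list[str], threshold: float = 0.5) -> list[list[str]]:
    """Stage 1 sketch: greedy clustering so near-duplicate instructions
    land in the same cluster, preserving topical diversity without any
    LLM-based quality scoring."""
    clusters: list[list[str]] = []
    for inst in instructions:
        for cluster in clusters:
            if jaccard(inst, cluster[0]) >= threshold:
                cluster.append(inst)
                break
        else:
            clusters.append([inst])
    return clusters

def merge_cluster(cluster: list[str]) -> str:
    """Stage 2 placeholder: in MergeIT this is an LLM call that synthesizes
    one richer instruction from the cluster; here we simply join members."""
    return cluster[0] if len(cluster) == 1 else " / ".join(cluster)

def mergeit_pipeline(instructions: list[str], threshold: float = 0.5) -> list[str]:
    """Filter into topic clusters, then merge each cluster into one item."""
    return [merge_cluster(c) for c in topic_filter(instructions, threshold)]

data = [
    "Explain how gradient descent works",
    "Explain how gradient descent works in simple terms",
    "Write a haiku about spring",
]
merged = mergeit_pipeline(data)
```

The key property the sketch preserves is that the output is both smaller than the input (redundant items are merged, not just dropped) and still covers every topic cluster.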

@article{cai2025_2503.00034,
  title={MergeIT: From Selection to Merging for Efficient Instruction Tuning},
  author={Hongyi Cai and Yuqian Fu and Hongming Fu and Bo Zhao},
  journal={arXiv preprint arXiv:2503.00034},
  year={2025}
}