Building Instruction-Tuning Datasets from Human-Written Instructions with Open-Weight Large Language Models

31 March 2025
Youmi Ma
Sakae Mizuki
Kazuki Fujii
Taishi Nakamura
Masanari Ohi
Hinari Shimada
Taihei Shiotani
Koshiro Saito
Koki Maeda
Kakeru Hattori
Takumi Okamoto
Shigeki Ishida
Rio Yokota
Hiroya Takamura
Naoaki Okazaki
Abstract

Instruction tuning is crucial for enabling Large Language Models (LLMs) to solve real-world tasks. Prior work has shown the effectiveness of instruction-tuning data synthesized solely from LLMs, raising a fundamental question: do we still need human-originated signals for instruction tuning? This work answers the question affirmatively: we build state-of-the-art instruction-tuning datasets sourced from human-written instructions, simply by pairing them with LLM-generated responses. LLMs fine-tuned on our datasets consistently outperform those fine-tuned on existing ones. Our data construction approach can be easily adapted to other languages; we build datasets for Japanese and confirm that LLMs tuned with our data reach state-of-the-art performance. Analyses suggest that instruction tuning in a new language enables LLMs to follow instructions in that language, while the tuned models exhibit a notable lack of culture-specific knowledge in that language. The datasets and fine-tuned models will be publicly available; because the datasets are synthesized with open-weight LLMs, they are distributed under permissive licenses, allowing for diverse use cases.
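
The core recipe lends itself to a short sketch: take human-written instructions and generate a response for each with an open-weight LLM. The Python sketch below illustrates this pairing step using the Hugging Face transformers chat pipeline; the model name, example instructions, output path, and generation settings are illustrative assumptions, not the authors' exact configuration.

import json
from transformers import pipeline

# Any open-weight instruction-following model can stand in here;
# this model name is an illustrative placeholder.
generator = pipeline("text-generation", model="Qwen/Qwen2.5-7B-Instruct")

# Human-written instructions, e.g. collected from an open corpus of
# real user prompts (hypothetical examples).
human_instructions = [
    "Explain the difference between supervised and unsupervised learning.",
    "Write a polite email declining a meeting invitation.",
]

pairs = []
for instruction in human_instructions:
    # The pipeline applies the model's chat template to the message list.
    messages = [{"role": "user", "content": instruction}]
    result = generator(messages, max_new_tokens=512)
    # The returned conversation ends with the assistant's reply.
    response = result[0]["generated_text"][-1]["content"]
    # Pair the human-written instruction with the LLM-generated response.
    pairs.append({"instruction": instruction, "output": response})

# Store the pairs as JSONL, a common format for instruction-tuning data.
with open("instruction_tuning_pairs.jsonl", "w", encoding="utf-8") as f:
    for pair in pairs:
        f.write(json.dumps(pair, ensure_ascii=False) + "\n")

Because only the responses come from an open-weight model while the instructions are human-written, the resulting pairs can be redistributed under the permissive terms the abstract describes.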

@article{ma2025_2503.23714,
  title={Building Instruction-Tuning Datasets from Human-Written Instructions with Open-Weight Large Language Models},
  author={Youmi Ma and Sakae Mizuki and Kazuki Fujii and Taishi Nakamura and Masanari Ohi and Hinari Shimada and Taihei Shiotani and Koshiro Saito and Koki Maeda and Kakeru Hattori and Takumi Okamoto and Shigeki Ishida and Rio Yokota and Hiroya Takamura and Naoaki Okazaki},
  journal={arXiv preprint arXiv:2503.23714},
  year={2025}
}