RAISE: Reinforenced Adaptive Instruction Selection For Large Language Models

9 April 2025
Lv Qingsong
Yangning Li
Zihua Lan
Zishan Xu
Jiwei Tang
Yinghui Li
Wenhao Jiang
Hai-Tao Zheng
Philip S. Yu
Abstract

In the instruction fine-tuning of large language models (LLMs), it has become a consensus that a small number of high-quality instructions is superior to a large number of low-quality ones. Many instruction selection methods have been proposed, but most select instructions using heuristic quality metrics and perform data selection only before training begins. These designs leave instruction fine-tuning insufficiently optimized, and fixed heuristic indicators are difficult to adapt to specific tasks. We therefore designed RAISE (Reinforenced Adaptive Instruction SElection), a dynamic, task-objective-driven instruction selection framework that incorporates the entire instruction fine-tuning process into its optimization, selecting instructions at each step according to their expected impact on model performance. Our approach is highly interpretable and has strong task-specific optimization capability. By modeling dynamic instruction selection as a sequential decision-making process, we train the selection policy with reinforcement learning (RL). Extensive experiments and analysis demonstrate the superiority of our method over other instruction selection methods. Notably, RAISE achieves superior performance while updating only 1% of the training steps used by full-data training, demonstrating its efficiency and effectiveness.
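The abstract frames step-wise instruction selection as a sequential decision process trained with RL. The sketch below illustrates that general recipe in PyTorch: a small policy network scores candidate instructions, a batch is sampled at each fine-tuning step, and the policy receives the resulting gain on a held-out metric as its reward, updated REINFORCE-style. All names (ScorerPolicy, fine_tune_step, eval_dev) and design details (instruction features, reward shape, baseline) are illustrative assumptions, not the paper's actual implementation.

# Hypothetical sketch of RL-driven instruction selection; not the authors' code.
import torch
import torch.nn as nn

class ScorerPolicy(nn.Module):
    """Scores candidate instructions from a per-instruction feature vector."""
    def __init__(self, feat_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.Tanh(), nn.Linear(hidden, 1)
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (num_candidates, feat_dim) -> selection logits: (num_candidates,)
        return self.net(feats).squeeze(-1)

def select_batch(policy, feats, k):
    """Sample k instructions without replacement from the policy's distribution."""
    probs = torch.softmax(policy(feats), dim=0)
    idx = torch.multinomial(probs, num_samples=k, replacement=False)
    log_prob = torch.log(probs[idx] + 1e-12).sum()  # approximate joint log-prob
    return idx, log_prob

def run_episode(policy, optimizer, pool_feats, steps, k, fine_tune_step, eval_dev):
    """One episode: fine_tune_step and eval_dev are stand-in callbacks that take
    one LLM gradient step on the selected data and score a held-out dev set."""
    log_probs, rewards = [], []
    prev = eval_dev()
    for _ in range(steps):
        idx, lp = select_batch(policy, pool_feats, k)
        fine_tune_step(idx)          # train the LLM on the selected instructions
        cur = eval_dev()
        rewards.append(cur - prev)   # reward = improvement in dev performance
        log_probs.append(lp)
        prev = cur
    # REINFORCE update with a mean baseline to reduce variance
    r = torch.tensor(rewards)
    loss = -(torch.stack(log_probs) * (r - r.mean())).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

The key design point this sketch captures is that selection happens inside the training loop and is rewarded by task-objective improvement, rather than being a one-shot heuristic filter applied before training.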

View on arXiv
@article{qingsong2025_2504.07282,
  title={RAISE: Reinforenced Adaptive Instruction Selection For Large Language Models},
  author={Lv Qingsong and Yangning Li and Zihua Lan and Zishan Xu and Jiwei Tang and Yinghui Li and Wenhao Jiang and Hai-Tao Zheng and Philip S. Yu},
  journal={arXiv preprint arXiv:2504.07282},
  year={2025}
}