DistilQwen2.5: Industrial Practices of Training Distilled Open Lightweight Language Models

21 April 2025
Chengyu Wang
Junbing Yan
Yuanhao Yue
Jun Huang
Abstract

Enhancing computational efficiency and reducing deployment costs for large language models (LLMs) have become critical challenges in various resource-constrained scenarios. In this work, we present DistilQwen2.5, a family of distilled, lightweight LLMs derived from the public Qwen2.5 models. These distilled models exhibit stronger instruction-following capabilities than the originals, owing to a series of distillation techniques that incorporate knowledge from much larger LLMs. In our industrial practice, we first leverage powerful proprietary LLMs with varying capacities as multi-agent teachers to select, rewrite, and refine instruction-response pairs so that they are more suitable for student LLMs to learn. After standard fine-tuning, we further apply a computationally efficient model fusion approach that enables student models to progressively integrate fine-grained hidden knowledge from their teachers. Experimental evaluations demonstrate that the distilled models possess significantly stronger capabilities than their original checkpoints. Additionally, we present use cases to illustrate the applications of our framework in real-world scenarios. To facilitate practical use, we have released all the DistilQwen2.5 models to the open-source community.
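The teacher-driven data-refinement step described above can be sketched as a simple pipeline: teacher models score instruction-response pairs for learnability, filter out unsuitable ones, and rewrite the rest before student fine-tuning. The sketch below is illustrative only and is not the paper's actual implementation; the teacher calls are replaced with toy heuristics, and all function names (`teacher_score`, `teacher_rewrite`, `build_distillation_set`) are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Pair:
    instruction: str
    response: str

def teacher_score(pair: Pair) -> float:
    # Stand-in for a proprietary teacher LLM judging how suitable the pair is
    # for a small student; a real system would query the teacher model here.
    return min(len(pair.response) / 100.0, 1.0)

def teacher_rewrite(pair: Pair) -> Pair:
    # Stand-in for the teacher rewriting a response into a cleaner form.
    return Pair(pair.instruction, pair.response.strip())

def build_distillation_set(raw: list[Pair], threshold: float = 0.2) -> list[Pair]:
    """Select pairs the teacher deems learnable, then refine them."""
    selected = [p for p in raw if teacher_score(p) >= threshold]
    return [teacher_rewrite(p) for p in selected]

raw_data = [
    Pair("Summarize photosynthesis.",
         "Plants convert light into chemical energy. " * 3),
    Pair("Say hi.", "hi"),  # too short: filtered out by the toy score
]
train_set = build_distillation_set(raw_data)
```

The resulting `train_set` would then feed standard supervised fine-tuning of the student, before the model-fusion stage the abstract mentions.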

@article{wang2025_2504.15027,
  title={DistilQwen2.5: Industrial Practices of Training Distilled Open Lightweight Language Models},
  author={Chengyu Wang and Junbing Yan and Yuanhao Yue and Jun Huang},
  journal={arXiv preprint arXiv:2504.15027},
  year={2025}
}