
VersaTune: An Efficient Data Composition Framework for Training Multi-Capability LLMs

Abstract

As demonstrated by proprietary Large Language Models (LLMs) such as the GPT and Claude series, LLMs have the potential to achieve remarkable proficiency across a wide range of domains, including law, medicine, finance, science, code, etc., all within a single model. These capabilities are further augmented during the Supervised Fine-Tuning (SFT) phase. Despite this potential, existing work mainly focuses on domain-specific enhancement during fine-tuning, which risks catastrophic forgetting of knowledge in other domains. In this study, we introduce **VersaTune**, a novel data composition framework designed to enhance LLMs' overall multi-domain capabilities during training. We begin by detecting the distribution of domain-specific knowledge within the base model, then compose the training data to align with the model's existing knowledge distribution. During the subsequent training process, domain weights are dynamically adjusted based on their learnable potential and forgetting degree. Experimental results indicate that VersaTune is effective in fostering multiple domains, with an improvement of 35.21% in overall multi-ability performance compared to uniform domain weights. Furthermore, we find that Qwen-2.5-32B + VersaTune even surpasses frontier models, including GPT-4o, Claude3.5-Sonnet and DeepSeek-V3, by 0.86%, 4.76% and 4.60%, respectively. Additionally, in scenarios where flexible expansion of a specific domain is required, VersaTune reduces the performance degradation in other domains by 38.77% while preserving the training efficacy of the target domain.
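To make the dynamic reweighting idea in the abstract concrete, below is a minimal sketch of adjusting per-domain sampling weights from per-domain loss changes. The function name, the use of loss deltas as proxies for "learnable potential" and "forgetting degree", and the hyperparameters `alpha` and `lr` are illustrative assumptions, not VersaTune's actual formulation.

```python
# Minimal sketch (assumed proxies, not the paper's exact method):
# shift sampling weight toward domains that still improve quickly
# (learnable potential) and toward domains whose loss is rising
# (forgetting), then renormalize so the weights sum to 1.
from typing import Dict


def update_domain_weights(
    weights: Dict[str, float],
    prev_loss: Dict[str, float],
    curr_loss: Dict[str, float],
    alpha: float = 0.5,   # trade-off between potential and forgetting
    lr: float = 0.1,      # how aggressively weights move per update
) -> Dict[str, float]:
    new_weights = {}
    for d, w in weights.items():
        potential = max(prev_loss[d] - curr_loss[d], 0.0)   # loss still dropping
        forgetting = max(curr_loss[d] - prev_loss[d], 0.0)  # loss creeping back up
        score = alpha * potential + (1 - alpha) * forgetting
        new_weights[d] = w * (1.0 + lr * score)
    total = sum(new_weights.values())
    return {d: w / total for d, w in new_weights.items()}


# Example: "code" still improves fast and "law" is regressing, so both gain
# weight at the expense of the already-saturated "medicine" domain.
weights = {"law": 0.3, "medicine": 0.4, "code": 0.3}
prev = {"law": 1.10, "medicine": 0.80, "code": 1.50}
curr = {"law": 1.20, "medicine": 0.79, "code": 1.20}
print(update_domain_weights(weights, prev, curr))
```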

@article{lu2025_2411.11266,
  title={VersaTune: An Efficient Data Composition Framework for Training Multi-Capability LLMs},
  author={Keer Lu and Keshi Zhao and Zhuoran Zhang and Zheng Liang and Da Pan and Shusen Zhang and Xin Wu and Guosheng Dong and Bin Cui and Tengjiao Wang and Wentao Zhang},
  journal={arXiv preprint arXiv:2411.11266},
  year={2025}
}
Comments: Main text 8 pages, bibliography 4 pages, appendix 12 pages; 14 figures, 7 tables