
Synergy over Discrepancy: A Partition-Based Approach to Multi-Domain LLM Fine-Tuning

Main: 9 pages · Bibliography: 4 pages · Appendix: 16 pages · 3 figures · 23 tables
Abstract

Large language models (LLMs) demonstrate impressive generalization abilities, yet adapting them effectively across multiple heterogeneous domains remains challenging due to inter-domain interference. To overcome this challenge, we propose a partition-based multi-stage fine-tuning framework designed to exploit inter-domain synergies while minimizing negative transfer. Our approach strategically partitions domains into subsets (stages), balancing domain discrepancy, domain synergy, and model capacity constraints. We theoretically analyze the proposed framework and derive novel generalization bounds that justify our partitioning strategy. Extensive empirical evaluations on a variety of language understanding tasks show that our method consistently outperforms state-of-the-art baselines.
