Sens-Merging: Sensitivity-Guided Parameter Balancing for Merging Large Language Models

Recent advances in large language models have led to numerous task-specialized fine-tuned variants, creating a need for efficient model merging techniques that preserve specialized capabilities while avoiding costly retraining. While existing task vector-based merging methods show promise, they typically apply uniform coefficients across all parameters, overlooking varying parameter importance both within and across tasks. We present Sens-Merging, a sensitivity-guided coefficient adjustment method that enhances existing model merging techniques by operating at both task-specific and cross-task levels. Our method analyzes parameter sensitivity within individual tasks and evaluates cross-task transferability to determine optimal merging coefficients. Extensive experiments on Mistral 7B and LLaMA2-7B/13B models demonstrate that Sens-Merging significantly improves performance across general knowledge, mathematical reasoning, and code generation tasks. Notably, when combined with existing merging techniques, our method enables merged models to outperform specialized fine-tuned models, particularly in code generation tasks. Our findings reveal important trade-offs between task-specific and cross-task scaling, providing insights for future model merging strategies.
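The abstract does not spell out the paper's exact formulation, but the core idea of scaling task vectors by per-parameter sensitivity can be sketched as follows. This is a minimal sketch under stated assumptions: the gradient-times-weight sensitivity proxy, the softmax normalization across tasks, and all function names (`task_vector`, `sensitivity`, `sens_merge`) are illustrative choices, not the authors' method.

```python
# Illustrative sketch of sensitivity-weighted task-vector merging.
# Assumptions (not from the paper): |grad * weight| as the sensitivity proxy,
# softmax over tasks as the coefficient normalization.
import torch

def task_vector(finetuned, base):
    """Task vector: fine-tuned weights minus the shared base weights."""
    return {k: finetuned[k] - base[k] for k in base}

def sensitivity(model, loss_fn, batch):
    """Crude per-parameter sensitivity proxy: mean |gradient * weight| on a task batch."""
    model.zero_grad()
    loss_fn(model, batch).backward()
    return {name: (p.grad * p).abs().mean().item()
            for name, p in model.named_parameters() if p.grad is not None}

def sens_merge(base, task_vectors, sens_per_task, lam=1.0):
    """Merge task vectors into the base, scaling each one per parameter group
    by its normalized sensitivity instead of a single uniform coefficient."""
    merged = {k: v.clone() for k, v in base.items()}
    for k in base:
        scores = torch.tensor([s.get(k, 0.0) for s in sens_per_task])
        coeffs = torch.softmax(scores, dim=0)  # higher sensitivity -> larger weight
        for coeff, tv in zip(coeffs, task_vectors):
            merged[k] += lam * coeff.item() * tv[k]
    return merged
```

In this sketch, uniform merging corresponds to replacing `coeffs` with equal weights; the sensitivity scores are what let different parameters receive different coefficients within and across tasks.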
@article{liu2025_2502.12420,
  title={Sens-Merging: Sensitivity-Guided Parameter Balancing for Merging Large Language Models},
  author={Shuqi Liu and Han Wu and Bowei He and Xiongwei Han and Mingxuan Yuan and Linqi Song},
  journal={arXiv preprint arXiv:2502.12420},
  year={2025}
}