ImPart: Importance-Aware Delta-Sparsification for Improved Model Compression and Merging in LLMs

With the proliferation of task-specific large language models, delta compression has emerged as a way to mitigate the resource cost of deploying many such models by compressing the delta parameters, i.e., the difference between a fine-tuned model and its base model. Previous delta-sparsification methods either remove parameters randomly or truncate singular vectors directly after singular value decomposition (SVD); they therefore either disregard parameter importance entirely or evaluate it at too coarse a granularity. In this work, we introduce ImPart, a novel importance-aware delta-sparsification approach. Leveraging SVD, it dynamically adjusts the sparsity ratios of different singular vectors based on their importance, effectively retaining crucial task-specific knowledge even at high sparsity ratios. Experiments show that ImPart achieves state-of-the-art delta-sparsification performance, reaching a higher compression ratio than baselines at the same performance level. When integrated with existing methods, ImPart sets a new state of the art on delta quantization and model merging.
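To make the idea concrete, below is a minimal PyTorch sketch of importance-aware delta sparsification as described in the abstract. It is not the authors' released implementation: the proportional keep-ratio allocation, the random entry dropping, and the rescaling of surviving entries are illustrative assumptions, and the function and variable names (`impart_sparsify`, `target_density`) are hypothetical.

```python
import torch


def impart_sparsify(delta: torch.Tensor, target_density: float = 0.1) -> torch.Tensor:
    """Sketch: sparsify a delta weight matrix with importance-aware ratios.

    Decompose the delta (fine-tuned minus base weights) with SVD, then prune
    entries of each singular-vector pair at a rate tied to its singular value,
    so more important directions keep more of their entries (assumed scheme).
    """
    U, S, Vh = torch.linalg.svd(delta, full_matrices=False)

    # Allocate per-rank keep ratios proportional to singular values,
    # normalized so the average keep ratio roughly matches target_density.
    importance = S / S.sum()
    keep_ratios = (importance * target_density * len(S)).clamp(max=1.0)

    delta_sparse = torch.zeros_like(delta)
    for i, keep in enumerate(keep_ratios):
        u, v = U[:, i], Vh[i, :]
        # Randomly drop entries of each singular vector; rescale survivors
        # so the reconstruction stays unbiased in expectation.
        mask_u = (torch.rand_like(u) < keep).float()
        mask_v = (torch.rand_like(v) < keep).float()
        u_sp = u * mask_u / keep.clamp(min=1e-8)
        v_sp = v * mask_v / keep.clamp(min=1e-8)
        delta_sparse += S[i] * torch.outer(u_sp, v_sp)
    return delta_sparse


# Usage sketch: sparsify the delta of one weight matrix, then add it back
# to the base weights at load time.
# delta = finetuned_weight - base_weight
# base_weight += impart_sparsify(delta, target_density=0.1)
```

In this sketch, directions with larger singular values receive higher keep ratios, which is one straightforward way to realize the importance-aware allocation the paper describes; the exact allocation rule used by ImPart may differ.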
@article{yang2025_2504.13237,
  title={ImPart: Importance-Aware Delta-Sparsification for Improved Model Compression and Merging in LLMs},
  author={Yan Yang and Yixia Li and Hongru Wang and Xuetao Wei and Jianqiao Yu and Yun Chen and Guanhua Chen},
  journal={arXiv preprint arXiv:2504.13237},
  year={2025}
}