
Forget the Data and Fine-Tuning! Just Fold the Network to Compress

Abstract

We introduce model folding, a novel data-free model compression technique that merges structurally similar neurons across layers, significantly reducing model size without fine-tuning or access to training data. Unlike existing methods, model folding preserves data statistics during compression by leveraging k-means clustering and by using novel data-free techniques to prevent variance collapse or explosion. Our theoretical framework and experiments on standard benchmarks, including ResNet18 and LLaMA-7B, demonstrate that model folding achieves performance comparable to data-driven compression techniques and outperforms recently proposed data-free methods, especially at high sparsity levels. The approach is particularly effective for compressing large-scale models, making it suitable for deployment in resource-constrained environments.
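To make the core idea concrete, the sketch below illustrates one way neuron merging by clustering can work on a pair of fully connected layers: the rows of a weight matrix (one per neuron) are grouped with k-means, each group is replaced by its centroid, and the corresponding fan-out columns of the following layer are summed so the merged activation serves all members of the group. This is only a minimal illustration under simplifying assumptions, not the paper's actual procedure; the function name fold_layer and its arguments are hypothetical, and the data-free corrections for variance collapse or explosion described in the abstract are omitted.

import numpy as np
from sklearn.cluster import KMeans

def fold_layer(W_out, W_next, k):
    # Cluster the neurons of the current layer (rows of W_out) by the
    # similarity of their incoming weight vectors.
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(W_out)
    labels = km.labels_

    # Folded current layer: one centroid weight vector per cluster.
    W_out_folded = km.cluster_centers_

    # Folded next layer: sum the fan-out columns of neurons in the same
    # cluster, since their (now shared) activation replaces each of them.
    W_next_folded = np.zeros((W_next.shape[0], k))
    for c in range(k):
        W_next_folded[:, c] = W_next[:, labels == c].sum(axis=1)

    return W_out_folded, W_next_folded

# Example: fold 64 neurons down to 32 between two toy layers.
W1 = np.random.randn(64, 32)   # 64 neurons, each with 32 inputs
W2 = np.random.randn(10, 64)   # next layer consumes the 64 activations
W1_folded, W2_folded = fold_layer(W1, W2, k=32)

Summing fan-out columns (rather than averaging) keeps the downstream pre-activations approximately unchanged when the merged neurons had near-identical weight vectors, which is the intuition behind merging structurally similar neurons.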

@article{wang2025_2502.10216,
  title={Forget the Data and Fine-Tuning! Just Fold the Network to Compress},
  author={Dong Wang and Haris Šikić and Lothar Thiele and Olga Saukh},
  journal={arXiv preprint arXiv:2502.10216},
  year={2025}
}