Dataset Distillation using Parameter Pruning
The acquisition of advanced models relies on large datasets in many fields, which makes storing datasets and training models expensive. As a solution, dataset distillation can synthesize a small dataset that preserves most of the information in the original large dataset. The recently proposed dataset distillation method based on matching network parameters has proven effective for several datasets. However, the dimensionality of the network parameters is usually large, and we found that a few parameters are difficult to match during the distillation process, which harms distillation performance. Based on this observation, this paper proposes a new method that addresses the problem using parameter pruning. By pruning difficult-to-match parameters during the distillation process, the proposed method synthesizes more robust distilled datasets and improves distillation performance. Experimental results on three datasets show that the proposed method outperforms other state-of-the-art dataset distillation methods.
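To make the core idea concrete, below is a minimal PyTorch sketch of what a pruned parameter-matching loss could look like, assuming a trajectory-matching-style objective over flat parameter vectors. The function name `pruned_matching_loss`, the `prune_ratio` knob, and the choice to drop the largest per-parameter mismatches are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def pruned_matching_loss(student_params: torch.Tensor,
                         expert_params: torch.Tensor,
                         init_params: torch.Tensor,
                         prune_ratio: float = 0.1) -> torch.Tensor:
    """Hypothetical matching loss that ignores hard-to-match parameters.

    student_params / expert_params / init_params: flat 1-D tensors of the
    synthetic-data-trained, expert-trained, and starting network parameters.
    prune_ratio: fraction of parameters (largest mismatch) to prune.
    """
    # Per-parameter mismatch between student and expert parameters.
    mismatch = (student_params - expert_params).abs()

    # Keep the easiest-to-match parameters; prune the rest
    # (one plausible criterion for "difficult to match").
    k = int(mismatch.numel() * (1.0 - prune_ratio))
    keep = torch.topk(mismatch, k, largest=False).indices

    # Normalized squared-error matching loss over the retained
    # parameters only (the usual trajectory-matching form).
    num = ((student_params[keep] - expert_params[keep]) ** 2).sum()
    den = ((init_params[keep] - expert_params[keep]) ** 2).sum()
    return num / den
```

The design point is that the pruned indices are excluded from both the numerator and the normalizer, so a handful of outlier parameters cannot dominate the gradient that updates the distilled dataset.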