
Towards Principled Dataset Distillation: A Spectral Distribution Perspective

Ruixi Wu
Shaobo Wang
Jiahuan Chen
Zhiyuan Liu
Yicun Yang
Zhaorun Chen
Zekai Li
Kaixin Li
Xinming Wang
Hongzhu Yi
Kai Wang
Linfeng Zhang
Main: 10 pages · 5 figures · 7 tables · Bibliography: 4 pages · Appendix: 16 pages
Abstract

Dataset distillation (DD) aims to compress large-scale datasets into compact synthetic counterparts for efficient model training. However, existing DD methods exhibit substantial performance degradation on long-tailed datasets. We identify two fundamental challenges: heuristic design choices for the distribution discrepancy measure and uniform treatment of imbalanced classes. To address these limitations, we propose Class-Aware Spectral Distribution Matching (CSDM), which reformulates distribution alignment via the spectrum of a well-behaved kernel function. This technique maps the original samples into frequency space, yielding the Spectral Distribution Distance (SDD). To mitigate class imbalance, we exploit the unified form of SDD to perform an amplitude-phase decomposition that adaptively prioritizes realism in tail classes. On CIFAR-10-LT with 10 images per class, CSDM achieves a 14.0% improvement over state-of-the-art DD methods, and suffers only a 5.7% performance drop when the number of images in tail classes decreases from 500 to 25, demonstrating strong stability on long-tailed data.
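The paper's exact SDD objective is defined via the kernel's spectrum; as a rough intuition for how such a distance can be computed and decomposed, the sketch below matches empirical characteristic functions of real and synthetic samples at random frequencies drawn from a Gaussian kernel's spectral measure (Bochner's theorem), then splits the mismatch into amplitude and phase terms. All function names, the choice of kernel, and the per-class weighting are illustrative assumptions, not the authors' implementation.

```python
import torch

def spectral_features(x, freqs):
    # Empirical characteristic function (ECF) of samples x at given frequencies.
    # x: (n, d) flattened samples; freqs: (m, d) frequencies drawn from the
    # kernel's spectral measure (Gaussian kernel <-> Gaussian frequencies,
    # by Bochner's theorem).
    proj = x @ freqs.T                                   # (n, m)
    return torch.complex(torch.cos(proj), torch.sin(proj)).mean(dim=0)

def sdd_loss(real, synth, freqs, amp_w=1.0, phase_w=1.0):
    # Toy "spectral distribution distance": compare the ECFs of real and
    # synthetic samples, decomposed into amplitude and phase terms.
    cf_r = spectral_features(real, freqs)
    cf_s = spectral_features(synth, freqs)
    amp = (cf_r.abs() - cf_s.abs()).pow(2).mean()        # magnitude mismatch
    # Compare phases on the unit circle so the term stays 2*pi-periodic.
    unit_r = torch.polar(torch.ones_like(cf_r.real), cf_r.angle())
    unit_s = torch.polar(torch.ones_like(cf_s.real), cf_s.angle())
    phase = (unit_r - unit_s).abs().pow(2).mean()
    return amp_w * amp + phase_w * phase

# Illustrative per-class use: a tail class with 25 real images distilled
# into 10 synthetic images, with the phase term upweighted (a stand-in for
# the paper's adaptive prioritization of realism in tail classes).
d, m, sigma = 3 * 32 * 32, 256, 10.0
freqs = torch.randn(m, d) / sigma                        # Gaussian spectral measure
real_c = torch.randn(25, d)                              # stand-in for real tail-class data
synth_c = torch.randn(10, d, requires_grad=True)         # learnable synthetic images
loss = sdd_loss(real_c, synth_c, freqs, amp_w=1.0, phase_w=2.0)
loss.backward()                                          # gradients flow to synth_c
```

In a class-aware setting this loss would be evaluated per class, with the amplitude and phase weights adapted to each class's sample count; the hypothetical fixed weights above only gesture at that idea.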
